
Search Results

58 items found

  • Scraping Data using Python | Akweidata

Scraping Data using Python

A Python application designed to generate a histogram depicting the frequency of articles published on Google News in 2022 concerning '@celebjets'.

Background

Created by then-teenager Jack Sweeney in 2020, @celebjets (now suspended) was a Twitter account that tracked the locations of celebrities' jets. The account gained worldwide notoriety through 2021 and 2022, mainly due to Jack Sweeney's altercation with Elon Musk over the privacy and safety concerns of tracking Elon's jet. More importantly, the account's posts sparked conversations about the vanity-filled lifestyles of celebrities and the significant CO2 footprints they leave through their heavy use of private jets.

Problem Formulation, Decomposition and Abstraction

With the given prompt at hand, we need to understand the problem space exhaustively in order to move efficiently and effectively from the undesired to the desired state of affairs. The problem is not monolithic, so we need to break it down before conceptualizing any solutions. Breaking down the problem requires the Computational Thinking concept known as Decomposition. By separating the problem into sub-problems, the task becomes more approachable, as one can quickly see how possible conceptual frameworks (in the form of existing Python commands) can be employed and knitted together to solve the problem.

However, before diving into decomposition, we need to recognize that the prompt does not encapsulate the entire problem space. Key components of the problem space concern the nature of the file news-celebjets.txt: How is the data organized? Where are the dates stated? What is the format of the dates? How can we work with this format? I therefore ran the HTML code and manually viewed a sample (the first ten) of the articles to get a brief idea of the nature of the data.

My findings were as follows:
- The data is not primary but secondary data: some analysis has already been made.
- The data is very well structured: article cover picture; logo and name of publisher; title of article (hyperlinked); the date.
- The list of articles appears to consistently follow the structure stated above.
- The dates appear to have the same format throughout: month and day, for example "Dec 14".

Intuitively, from my findings, I have already applied the Computational Thinking concept of Patterns and Generalizations. By identifying the repeated structure of the list of articles, I wondered: can loops or some other iterative command assist me with extracting the dates?

With these insights at hand, I attained a greater understanding of the problem space. Consequently, I employed the General to Specific decomposition technique, which involves breaking down a problem from a general perspective and then adding specific, more detailed components. As the given problem is not open-ended and specific requirements were given, I found this technique the most appropriate. The results of my decomposition are as follows:

General Problem: Analyze news articles and create a histogram representing the number of articles published per week.

Listed below with the letters a, b, c, d and e are the definitions of the desired characteristics of the solution, i.e. the subproblems. To address these characteristics/subproblems, we need to get specific; hence, below each subproblem, listed in Roman numerals, are the specifications written in pseudocode. Note that, with the exception of subproblems "a" and "c", I relied heavily on ChatGPT to write out the specifics for the other subproblems, as I had zero experience with the commands required.

a. Read the scraped data from the text file (news-celebjets.txt).
b. Find the publishing dates of the news articles.
c. Sort the publishing dates.
e. Plot a histogram representing the article count per week.

Figure 1: Abstraction

Algorithmic Solution and the Agile Process

An algorithm is a well-defined sequence of instructions that takes one or more input values and produces output values. Per the abstraction above, we have an idea of the desired solution's input, output and sequence of instructions. In the main.py file attached, I have generated a solution, pictured below in Figure 2. Following the specifics from the decomposition phase, I have commented extensively throughout the code on my reasoning and methods, which I shall not repeat here. Instead, in this section I comment on the role of ChatGPT in my Agile solution-creation process.

Figure 2: My Final Histogram
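The decomposition described above (read the scraped file, extract the dates, sort, count per week, plot) can be sketched as follows. This is a minimal illustration, not the attached main.py: the 'Mon DD' date format, the fixed year 2022 and the sample dates are assumptions.

```python
from collections import Counter
from datetime import datetime

def weekly_counts(date_strings, year=2022):
    """Parse 'Mon DD' strings and count articles per ISO week."""
    weeks = []
    for text in date_strings:
        parsed = datetime.strptime(text, "%b %d").replace(year=year)
        weeks.append(parsed.isocalendar()[1])  # ISO week number
    return Counter(weeks)

# Hypothetical sample of dates extracted from the article list
sample = ["Dec 14", "Dec 15", "Dec 15", "Nov 02"]
counts = weekly_counts(sample)

# A histogram could then be drawn with matplotlib, e.g.:
# import matplotlib.pyplot as plt
# plt.bar(counts.keys(), counts.values()); plt.show()
```

Sorting the parsed dates before counting is optional here, since Counter aggregates regardless of order; it mainly helps when plotting weeks in chronological order.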

  • Photography Tool: Black & White Conversion | Akweidata

Photography Tool: Black & White Conversion

A basic photo editor to convert PNG pictures from color to black and white.
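A color-to-grayscale conversion at the pixel level can be sketched as below; in practice a library such as Pillow does this via Image.convert("L"). The luma weights (ITU-R BT.601) and the sample pixels are illustrative, not the editor's actual code.

```python
def to_grayscale(pixels):
    """Convert (R, G, B) tuples to single luma values using ITU-R BT.601 weights."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in pixels]

# A pure-white and a pure-red pixel
gray = to_grayscale([(255, 255, 255), (255, 0, 0)])
```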

  • Scraping Oil-related articles | Akweidata

Scraping Oil-related articles

Run in Python via Google Colab.

# Install and set up necessary packages and dependencies
!pip install selenium
!apt-get update
!apt install chromium-chromedriver

import sys
sys.path.insert(0, '/usr/lib/chromium-browser/chromedriver')

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import pandas as pd

# Set up Chrome options for Selenium
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')

# Initialize the Chrome WebDriver with the specified options
driver = webdriver.Chrome(options=chrome_options)

# Fetch the web page
url = 'https://news.google.com/search?q=oil%20prices'
driver.get(url)

# Get the page source and close the browser
html = driver.page_source
driver.quit()

# Parse the web page using BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
articles = soup.find_all('article')

# Extract the necessary information
news_data = []
base_url = 'https://news.google.com'
for article in articles:
    # Extract the title and link
    title_link_element = article.find('a', class_='JtKRv', href=True)
    title = title_link_element.text.strip() if title_link_element else "No Title"
    link = base_url + title_link_element['href'][1:] if title_link_element else "No Link"
    # Extract the date
    time_element = article.find('time')
    if time_element and 'datetime' in time_element.attrs:
        date = time_element['datetime']
    elif time_element:
        date = time_element.text.strip()
    else:
        date = "No Date"
    news_data.append([title, link, date])

# Store the data in a DataFrame
df = pd.DataFrame(news_data, columns=['Title', 'Link', 'Date'])
csv_file = 'google_news_oil_prices.csv'
df.to_csv(csv_file, index=False)

# Download the file to your computer (only works in Google Colab)
try:
    from google.colab import files
    files.download(csv_file)
except ImportError:
    print("The files module is not available. This code is not running in Google Colab.")

Future Projects:
- Relation of the frequency of oil-related posts and sustainability risks
- Relation of the frequency of oil-related posts and stock prices (general and oil-producing/intensive firms)

Updated Code

# Install and set up necessary packages and dependencies
!pip install selenium
!apt-get update
!apt install chromium-chromedriver

import sys
sys.path.insert(0, '/usr/lib/chromium-browser/chromedriver')

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import pandas as pd
import time
from datetime import datetime, timedelta
import re

# Function to convert various date formats to a standardized format
def convert_relative_date(text):
    current_datetime = datetime.now()
    current_year = current_datetime.year
    if 'hour' in text or 'minute' in text:
        return current_datetime.strftime('%Y-%m-%d')
    elif 'yesterday' in text.lower():
        # Checked before 'day', since 'yesterday' also contains 'day'
        return (current_datetime - timedelta(days=1)).strftime('%Y-%m-%d')
    elif 'day' in text:
        match = re.search(r'\d+', text)
        days_ago = int(match.group()) if match else 0
        return (current_datetime - timedelta(days=days_ago)).strftime('%Y-%m-%d')
    else:
        try:
            parsed_date = datetime.strptime(text, '%b %d')
            return datetime(current_year, parsed_date.month, parsed_date.day).strftime('%Y-%m-%d')
        except ValueError:
            return text  # Return the original text if parsing fails

# Set up Chrome options for Selenium
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')

# Initialize the Chrome WebDriver with the specified options
driver = webdriver.Chrome(options=chrome_options)

# Fetch the web page
url = 'https://news.google.com/search?q=oil%20prices'
driver.get(url)

# Scroll the page to load more articles
for _ in range(5):  # Adjust the range for more or fewer scrolls
    driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.END)
    time.sleep(2)  # Wait for the page to load

# Get the page source and close the browser
html = driver.page_source
driver.quit()

# Parse the web page using BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
articles = soup.find_all('article')

# Extract the necessary information
news_data = []
base_url = 'https://news.google.com'
for article in articles:
    title_link_element = article.find('a', class_='JtKRv', href=True)
    title = title_link_element.text.strip() if title_link_element else "No Title"
    link = base_url + title_link_element['href'][1:] if title_link_element else "No Link"
    time_element = article.find('time')
    date = time_element.text.strip() if time_element else "No Date"
    news_data.append([title, link, date])

# Store the data in a DataFrame
df = pd.DataFrame(news_data, columns=['Title', 'Link', 'Date'])

# Convert dates to a standardized format
for i, row in df.iterrows():
    df.at[i, 'Date'] = convert_relative_date(row['Date'])

# Save the DataFrame to CSV
csv_file = 'google_news_oil_prices.csv'
df.to_csv(csv_file, index=False)

# Download the file to your computer (only works in Google Colab)
try:
    from google.colab import files
    files.download(csv_file)
except ImportError:
    print("The files module is not available. This code is not running in Google Colab.")

  • Cocoa Production: Ghana and Ivory Coast - 2022 | Akweidata

Cocoa Production: Ghana and Ivory Coast - 2022

Summary of cocoa production in Ghana and Ivory Coast in 2022.

  • Converting Excel to CSV: Web Application | Akweidata

Converting Excel to CSV: Web Application

A basic web application written in HTML and JavaScript to convert Excel files to CSV.

Github: https://github.com/akweix/excel_to_csv

  • Hedonic Valuation Model: Real Estate in Zurich | Akweidata

Hedonic Valuation Model: Real Estate in Zurich

With significant portions of banks' portfolios consisting of mortgage loans, it is paramount to develop a strong model for valuing real estate.

https://akweix.shinyapps.io/HedonicValuationModel/
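A hedonic valuation model regresses observed prices on property attributes, so each attribute's coefficient can be read as its implicit price. Below is a minimal numpy sketch of that idea with invented attributes and prices; it is not the model behind the Shiny app.

```python
import numpy as np

# Hypothetical listings: [rooms, area in m2, distance to center in km]
X = np.array([
    [3, 80, 5.0],
    [4, 110, 3.0],
    [2, 55, 8.0],
    [5, 140, 2.0],
    [3, 90, 6.0],
], dtype=float)
prices = np.array([900_000, 1_400_000, 650_000, 1_900_000, 1_000_000], dtype=float)

# Add an intercept column and solve the least-squares problem
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, prices, rcond=None)

# coeffs[0] is the base price; coeffs[1:] are the implicit prices of the attributes
predicted = A @ coeffs
```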

  • Cocoa Production: Ghana and Ivory Coast - Historic Trend | Akweidata

Cocoa Production: Ghana and Ivory Coast - Historic Trend

Work in progress

  • Beta of Fan Milk Ltd (FML): Ghana Stock Exchange (GSE) | Akweidata

Beta of Fan Milk Ltd (FML): Ghana Stock Exchange (GSE)

Finding the beta of FML on the GSE using Python (Jupyter Notebook).

FML Beta Results

Timeframe    Raw Beta    Adjusted Beta
1-Month      0.563058    0.708706
3-Month      0.145659    0.430439
1-Year       0.408104    0.605403
2-Year       0.356980    0.571320
3-Year       0.366631    0.577754
5-Year       0.350336    0.566891
10-Year      0.372667    0.581778

View real-time data on FML via my GSE stock data viewer: https://www.akweidata.com/projects-1/ghana-stock-exchange%3A-real-time-prices-web-app-v1

Data

FML data retrieved from the Ghana Stock Exchange website: https://gse.com.gh/trading-and-data/
GSE-CI data retrieved from Eikon Refinitiv

Code

import pandas as pd

GSECI = pd.read_excel("GSECIdata")
FML = pd.read_excel("FMLdata")

GSECI.head()
FML.head()
GSECI.dtypes
FML.dtypes

# Convert the date columns to the same format
# Assuming the date columns are named 'Date' in both dataframes
GSECI['Date'] = pd.to_datetime(GSECI['Date'], format='%m/%d/%Y')
FML['Date'] = pd.to_datetime(FML['Date'], format='%d/%m/%Y')

# Now merge the dataframes on the 'Date' column
combined_data = pd.merge(FML, GSECI, on='Date', suffixes=('_FML', '_GSECI'))

# Display the first few rows of the combined dataframe to check the merge
print(combined_data.head())

# Calculate daily returns for FML and GSE-CI
combined_data['Return_FML'] = combined_data['Close_FML'].pct_change()
combined_data['Return_GSECI'] = combined_data['Close_GSECI'].pct_change()

# Drop the NaN values that result from pct_change()
combined_data = combined_data.dropna()

# Calculate the covariance between FML's and GSE-CI's returns
covariance_matrix = combined_data[['Return_FML', 'Return_GSECI']].cov()
covariance = covariance_matrix.loc['Return_FML', 'Return_GSECI']

# Calculate the variance of GSE-CI's returns
variance_gseci = combined_data['Return_GSECI'].var()

# Calculate the beta of FML
beta_fml = covariance / variance_gseci
print(f"The beta of FML is: {beta_fml}")

import numpy as np
import pandas as pd

# Assuming 'combined_data' has already been defined and contains daily return data

# Define a function to calculate raw and adjusted beta
def calculate_beta(return_stock, return_market):
    covariance = return_stock.cov(return_market)
    variance = return_market.var()
    raw_beta = covariance / variance
    # Adjusted beta is calculated with the formula (2/3 * raw_beta + 1/3)
    adjusted_beta = (2/3 * raw_beta) + (1/3)
    return raw_beta, adjusted_beta

# Define time frames in trading days
time_frames = {
    '1-Month': 21,
    '3-Month': 63,
    '1-Year': 252,
    '2-Year': 504,
    '3-Year': 756,
    '5-Year': 1260,
    '10-Year': 2520,
}

# List to store beta values
beta_values = []

# Calculate beta for each time frame
for period, days in time_frames.items():
    if days < len(combined_data):
        # Slice the last 'days' of trading data for the period
        period_data = combined_data.tail(days)
        raw_beta, adjusted_beta = calculate_beta(period_data['Return_FML'], period_data['Return_GSECI'])
        beta_values.append({'Timeframe': period, 'Raw Beta': raw_beta, 'Adjusted Beta': adjusted_beta})

# Convert the list of dictionaries to a DataFrame
beta_df = pd.DataFrame(beta_values)

# Print the beta values in tabular form
print(beta_df.to_string(index=False))
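Because the notebook code above depends on local Excel files, here is a self-contained sanity check of the same beta formulas on synthetic daily returns. The series, the seed and the true beta of 0.5 are invented for illustration.

```python
import numpy as np
import pandas as pd

# Synthetic daily returns with a known relationship (true beta = 0.5)
rng = np.random.default_rng(0)
market = pd.Series(rng.normal(0, 0.01, 500))
stock = 0.5 * market + pd.Series(rng.normal(0, 0.01, 500))

# Same formulas as in the notebook: beta = cov(stock, market) / var(market)
raw_beta = stock.cov(market) / market.var()
adjusted_beta = (2 / 3) * raw_beta + 1 / 3
```

With 500 observations the estimated raw beta lands close to the true 0.5, and the adjustment pulls it toward 1, as in the results table above.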

  • Dynamic view of Ghana's Forestry | Akweidata

Dynamic view of Ghana's Forestry

Work in progress

  • Commentary: Washington’s Decision to “Normalize” Relations with Cuba... | Akweidata

Commentary: Washington’s Decision to “Normalize” Relations with Cuba...

An economic commentary on the article "Washington’s Decision to “Normalize” Relations with Cuba: Impede China’s Growing Influence in Latin America"

Date the commentary was written: 21/09/2016

Read the original article on Global Research: "Washington’s Decision to “Normalize” Relations with Cuba: Impede China’s Growing Influence in Latin America?" by Birsen Filip - 28.08.2016

The article under consideration is about the possible lifting of the Cuban embargo imposed by the American government in 1962. The idea of removing this historic embargo has been introduced recently and is in the process of becoming a reality due to Barack Obama. Barack Obama, the sitting president of the United States according to the article, shocked the world by officially re-establishing diplomatic relations with Cuba and, furthermore, slowly lifting the historic embargo. However, the article explores the lifting of the embargo as a means for the USA to impede the growth of China's international market power in Latin America, in light of the recent trade deal between China and Cuba. In this commentary, I shall explore the probable effects of lifting the embargo with respect to the international market and the Cuban economy.

From the article, it can be deduced that the USA is trying to prevent China from becoming a "monopoly" in the international market. An embargo is a government order that restricts commerce or exchange with a specified country, or the exchange of specific goods. An embargo is usually created as a result of unfavorable political or economic circumstances between nations. The restriction seeks to isolate the country and create difficulties for its governing body, forcing it to act on the underlying issue. [1] In the case of the US embargo on Cuba, the cause was Cuba's relations with Communist powers.

The Cuban embargo mainly affected tourism in Cuba, sugar production, many other agricultural sectors and cigar firms.

Figure 1: Current agricultural production in the Cuban economy

As illustrated in the graph above, because the embargo effectively prohibits Cuba from trading internationally (as the USA "penalizes" other countries that trade with Cuba), Cuba cannot exploit the advantage of its agricultural goods' lower price in comparison to the world price. However, if the embargo were lifted, Cuba would benefit greatly, as it can produce many agricultural goods at a lower price than most countries and could furthermore specialize in agricultural goods to increase production even further. This would lead to an increase in jobs, GDP and incomes in Cuba.

Due to the embargo, many goods and services have to be produced domestically, as Cuba cannot benefit from international trade. Because such a vast array of goods and services is produced domestically, Cuba cannot produce them all efficiently, and quality is quite low. For instance, it is not efficient for Cuba to produce heavy-duty farming machines, whereas China, having a comparative advantage in heavy-duty machines, can produce them effectively. A country has a comparative advantage in producing a product when it has the lowest opportunity cost for producing that product.

Figure 2: The market for electronics and technological devices in Cuba currently

As illustrated in the diagram above, Cuba's technological industry, and many other industries, currently produce at a higher price than the world price. This is mainly due to the lack of specialization. The people of Cuba are subjected to high-priced goods and services of very low quality. However, if the embargo were lifted, Cubans would have access to the lower-priced, higher-quality goods and services of the international market.

Due to the large diversification of goods and services produced domestically, the Cuban economy has not specialized in particular products, and hence does not hold any significant comparative advantage in producing any good or service when compared to most countries. With the embargo lifted, Cuba would be able to trade much more easily in the international market and would therefore have access to cheaper raw resources from Africa and the Americas, cheaper labor from Asia, and greater capital from Europe and North America.

Figure 3: Effect of lifting the embargo on the Cuban economy

As shown in the diagram above, lifting the embargo would be highly beneficial for the Cuban economy. Aggregate demand and aggregate supply would increase. The total output of the economy increases from Y1 to Y2. The average price level of goods and services increases, but this increase is actually quite beneficial for Cuba, as incomes would rise and producers would make larger profits. The lack of specialization due to the embargo hinders the growth of the Cuban economy; lifting the embargo would increase economic activity and boost economic growth in Cuba.

[1] http://www.investopedia.com/

  • Alternative Data Regressor: V1 | Akweidata

Alternative Data Regressor: V1

A Python program to obtain a linear regression of some alternative data against financial asset prices. A CSV file is the input; the output is the regression results.

The provided Python program is designed to process time series data from a CSV file and execute a series of analytical steps based on a predefined decision tree. Key functionalities include:

- Reading a CSV file: The user inputs the path to a CSV file, which the program reads into a DataFrame.
- Stationarity testing: It tests the time series data for stationarity using the augmented Dickey-Fuller test.
- Adjusting for non-stationarity: If the data is non-stationary, it applies a log transformation to stabilize the time series.
- Re-testing for stationarity: After transformation, it retests the data for stationarity.
- Significance testing: Conducts an Ordinary Least Squares (OLS) regression to test the significance of the relationship between the time series and a dependent variable.
- Model development and evaluation: If a significant relationship is found, the program proceeds to develop a baseline regression model, which is then refined and evaluated based on its R-squared value.
- Output: The program outputs the results of the stationarity tests, significance tests, and the R-squared value of the regression model.
import pandas as pd
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.regression.linear_model import OLS
import statsmodels.api as sm
from scipy import stats
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def test_stationarity(timeseries):
    # Perform the augmented Dickey-Fuller test
    dftest = adfuller(timeseries, autolag='AIC')
    return dftest[1]  # p-value

def adjust_non_stationarity(data):
    # Adjust for non-stationarity (example: log transformation)
    return np.log(data)

def significance_testing(X, y):
    # Perform significance testing (example: OLS regression)
    X = sm.add_constant(X)  # add a constant
    model = OLS(y, X).fit()
    return model.pvalues

def main():
    # Load data
    file_path = input("Enter the path to your CSV file: ")
    df = pd.read_csv(file_path)

    # Assuming the time series column is named 'timeseries'
    timeseries = df['timeseries']

    # Step 1: Test for stationarity
    if test_stationarity(timeseries) > 0.05:
        # Step 2: Adjust data for non-stationarity
        timeseries = adjust_non_stationarity(timeseries)
        # Step 3: Re-test for stationarity
        if test_stationarity(timeseries) > 0.05:
            print("Data is still non-stationary after transformation. Ending process.")
            return
        else:
            print("Data is stationary after transformation. Proceeding with analysis.")
    else:
        print("Data is stationary. Proceeding with analysis.")

    # Step 4: Significance testing
    # Assuming another column 'dependent_var' as the dependent variable
    pvalues = significance_testing(df[['timeseries']], df['dependent_var'])
    if any(pval < 0.05 for pval in pvalues[1:]):  # ignore the constant's p-value
        print("Significant correlation found. Proceeding to model development.")
    else:
        print("No significant correlation found. Ending process.")
        return

    # Steps 5, 6, 7: Develop, refine, and evaluate the regression model
    # This is a simplified example using OLS regression
    X_train, X_test, y_train, y_test = train_test_split(
        df[['timeseries']], df['dependent_var'], test_size=0.2, random_state=0)
    model = OLS(y_train, sm.add_constant(X_train)).fit()
    predictions = model.predict(sm.add_constant(X_test))
    print("Model R-squared:", r2_score(y_test, predictions))

    # Step 8: Interpret the regression line
    # This step is more analytical and depends on the specific model and data

    # Step 9: Comparative analysis

if __name__ == "__main__":
    main()
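The program's significance step relies on statsmodels' p-values. For intuition, the same decision can be sketched by hand with numpy on synthetic data: compute the OLS slope's t-statistic and compare it to the large-sample 5% critical value of about 1.96. The data, seed and threshold here are illustrative assumptions, not part of the program.

```python
import numpy as np

# Synthetic data with a clear linear relationship (true slope = 2)
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)

# OLS fit with an intercept
A = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Standard error of the slope and its t-statistic
resid = y - A @ coef
sigma2 = resid @ resid / (len(x) - 2)
se_slope = np.sqrt(sigma2 / ((x - x.mean()) @ (x - x.mean())))
t_stat = coef[1] / se_slope

# |t| > 1.96 approximates significance at the 5% level for large samples
significant = abs(t_stat) > 1.96
```

Here the relationship is strong by construction, so the slope is flagged as significant, which is the branch where the program proceeds to model development.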

  • Commentary: Ghana fixes new cocoa price to control smuggling | Akweidata

Commentary: Ghana fixes new cocoa price to control smuggling

An economic commentary on the article "Ghana fixes new cocoa price to control smuggling"

Date the commentary was written: 06/12/2015

Read the original article on Theafricareport.com: Ghana fixes new cocoa price to control smuggling | West Africa, by Dasmani Laary - 05.10.2015

The article under consideration is about an increase in the fixed price of cocoa in Ghana in order to curb the smuggling of cocoa into Ivory Coast. Ivory Coast and Ghana share a border cutting through their respective cocoa plantations; hence, smuggling easily occurs. The article is also about a subsidy granted to cocoa farmers to raise their output.

The Ghanaian government imposed a higher fixed price of cocoa; this can be viewed as the government imposing a higher minimum price on cocoa, as the fixed price is above the equilibrium price. A fixed price is a market price imposed by the government, and producers are only allowed to sell at exactly that price. The Cocoa Board, a government-controlled institution, fixes the buying price for cocoa in Ghana; thus the cocoa market in Ghana is planned. The price-fixing is meant to protect cocoa farmers from volatile prices on the world market, as the article says. The new fixed price of cocoa, $1,759 per ton, is an increase from the former $1,444. This increase would deter cocoa farmers from smuggling their cocoa into Ivory Coast to sell it at the once-better price of $1,718 per ton.

Price elasticity of demand or supply refers to the responsiveness of quantity demanded or quantity supplied to a change in price. The price elasticity of demand for Ghanaian cocoa is relatively elastic, as Ivorian (and South American) cocoa are near-perfect substitutes. The supply of Ghanaian cocoa is also elastic, because the smuggling of cocoa between Ghana and Ivory Coast depends on price and in effect affects the supply of Ghanaian cocoa positively or negatively. So if the price is higher in Ghana than in Ivory Coast, smuggling from Ghana to Ivory Coast would be curbed; rather, cocoa grown in Ivory Coast would be smuggled into Ghana, increasing the supplied quantity of Ghanaian cocoa. The effect of increasing the fixed price is shown in this diagram:

Figure 1: Increasing the fixed price

As a result of this increase in quantity supplied, there would be an excess supply (QS1 to QS2) of Ghanaian cocoa. Cocoa can be stored for a long time without losing its quality, but the government would then need to spend more on storage facilities (the article makes reference to warehouses being built). An increase in the fixed price would also make Ghanaian cocoa less competitive globally, as Ivorian (and South American) cocoa would be winning the price war. This may prove costly and highly inefficient for the Ghanaian government, as 15% of Ghana's GDP comes from cocoa alone. However, the aim of increasing the fixed price was to curb the smuggling of cocoa from Ghana to Ivory Coast, and this action would achieve that aim. Not only would the policy do so, it would also stimulate the smuggling of cocoa from Ivory Coast to Ghana, turning the tables.

Although I have discussed the decrease in the international competitiveness of Ghanaian cocoa (as PED is relatively elastic), that does not necessarily mean there will be a decrease in revenue. Firstly, the old and new fixed prices are both relatively higher than the equilibrium price, so PED may be inelastic at those high prices. Also, due to the multiple contracts and deals in place with cocoa-processing firms, Ghanaian farmers would still be able to sell to their previous customers, for example Nestle. Not only that: with a higher supply of cocoa from Ghana, Ghanaian farmers would be able to meet their contract obligations with those cocoa-processing firms, whereas, due to the reduction in cocoa in Ivory Coast, Ivorian farmers may not meet their obligations, and their deals and contracts would be passed on to Ghanaian firms.

In a bid to further increase the supply of cocoa in Ghana (the article refers to the targeted output of 900,000 tonnes for 2015/2016, an increase from the actual 700,000-tonne output of the previous year), the government is giving cocoa farmers a subsidy. The bonus of 5 cedis per 64-kilogramme bag is a per-unit subsidy. A subsidy is financial aid given to producers by the government in order to decrease their cost of production and, in effect, increase their total output. The effect of the subsidy is shown in Figure 2.

Figure 2: Effect of the subsidy on the Ghanaian cocoa market

As illustrated in Figure 2, producers (Ghanaian farmers) would have higher revenue due to the subsidy; however, the Ghanaian government would have to pay a lot for it. Either way, the subsidy decreases the cost of production for farmers, who in effect are able to produce more cocoa.

The combination of the two policies discussed in this paper would simply lead to a very large increase in the supply of cocoa in Ghana. However, this may not be beneficial for all stakeholders. Initially farmers may enjoy larger incomes, but they may eventually have to sell off the excess supply of cocoa at a lower price due to expensive storage. Government expenditure would increase due to the subsidy and the stabilization fund discussed in the article. However, if the demand for cocoa continues to increase in the global market, the Ghanaian government and farmers would benefit greatly.
