Search Results
- Implementing design thinking for efficiency in Ghana's public sector | Akweidata
This research paper proposes a remedy for Ghana's weak public sector by exploring the implementation of Design Thinking in the sector's practices. The study employs a sequential mixed-methods approach. First, the quantitative phase assessed the efficiency of Ghana's public sector using Key Performance Indicators. Second, the qualitative phase used document analysis to surface insights on the use of design thinking in other public sectors. Results revealed that design thinking is well suited to Ghana's public sector as a means of improving the delivery of public services. Because of the sector's weakness, core societal needs such as sanitation and health-care access are inadequately met; these areas are therefore likely to benefit most from a design approach. Results also indicated that if design thinking is to be implemented in Ghana's public sector, projects should be rolled out by local governments rather than at the national level, and that significant expertise would need to come via private-sector collaborations.

Download PDF: Undergrad Thesis.pdf (1.01 MB)
- Data Visualization of the Dynamic Efficiency of Oil and Gas Production in Ghana | Akweidata
A comprehensive tool for understanding the real-time efficiency of oil and gas production in Ghana: https://akweix.shinyapps.io/trial_app/

Welcome to my R Shiny web app, "Data Visualization of the Dynamic Efficiency of Oil and Gas Production in Ghana." This web app leverages a range of data science techniques, including interactive visualizations, machine learning, sentiment analysis, natural language processing, data analytic tools and web scraping, to provide real-time, comprehensive analysis of Ghana's oil and gas sector. The goal is to enhance information efficiency, market efficiency and resource-management efficiency, making it a valuable tool for practitioners, academics and policymakers alike. The application is primarily centred on Ghana, especially in its visualizations; however, the data analytic tools developed can be applied to all markets and regions. Additionally, although the application presents insights and tools applicable to both oil and gas, greater emphasis was placed on oil production because of its larger share of Ghana's energy market and its more dynamic nature.

Download PDF: anum_sean_data_science_final_report.pdf (2.66 MB)
- Manipulating File Paths: Backward to Forward Slashes | Akweidata
A program made to convert backward slashes in file path names to forward slashes, targeted at Windows users copying paths into R or Python.
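A minimal sketch of the idea in Python (illustrative, not the program's actual source):

# Minimal illustrative sketch: convert a Windows-style path with backslashes
# into a forward-slash path usable in R or Python.
def to_forward_slashes(path: str) -> str:
    # Strip the surrounding quotes that Windows "Copy as path" adds, then swap slashes
    return path.strip('"').replace("\\", "/")

if __name__ == "__main__":
    windows_path = input("Paste a Windows path: ")
    print(to_forward_slashes(windows_path))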
- Value at Risk (VaR) for a portfolio | Akweidata
A simple tool that uses historical simulation to find the Value at Risk (VaR) of a portfolio.
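A minimal sketch of the approach in Python (the tool itself is not reproduced here; the return series below is simulated purely for illustration):

import numpy as np

def historical_var(returns, confidence=0.95):
    # Historical-simulation VaR: the loss exceeded in only (1 - confidence)
    # of the observed historical scenarios.
    losses = -np.asarray(returns)  # losses are negated returns
    return np.percentile(losses, confidence * 100)

# Assumed daily portfolio returns for demonstration
rng = np.random.default_rng(0)
sample_returns = rng.normal(0.0005, 0.01, 500)
print(f"95% one-day VaR: {historical_var(sample_returns):.4f}")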
- Electricity Consumption as a proxy of production: Draft 1 | Akweidata
Using publicly available data on Swiss power consumption, this exploration seeks to identify an association between power consumption and selected firms' output.

https://www.swissgrid.ch/en/home/operation/grid-data/current-data.html#wide-area-monitoring
https://www.ewz.ch/en/about-ewz/newsroom/current-issues/electricity-shortage/city-zurich-energy-consumption.html
https://data.stadt-zuerich.ch/dataset/ewz_stromabgabe_netzebenen_stadt_zuerich
https://data.stadt-zuerich.ch/group/energie

Work in Progress
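Since this is a work in progress, here is only a minimal sketch of the intended association test in Python, assuming two already-aligned monthly series; the file names and column names are hypothetical placeholders, not actual project data:

import pandas as pd

# Hypothetical inputs (placeholders): monthly power consumption drawn from
# the open-data sources above, and a firm-output series, indexed by month.
consumption = pd.read_csv("zurich_consumption.csv", index_col="month")["gwh"]
output = pd.read_csv("firm_output.csv", index_col="month")["units"]

# Align the two series on their common months and compute a first-pass
# Pearson correlation as the association measure.
aligned = pd.concat([consumption, output], axis=1, join="inner")
print(aligned.corr())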
- Expected Loss Calculator | Akweidata
A simple tool to calculate the Expected Loss for a credit portfolio.
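For reference, the standard single-exposure formula is EL = PD x LGD x EAD, summed across the portfolio; a minimal sketch in Python, with assumed example exposures:

def expected_loss(pd_, lgd, ead):
    # Expected Loss = Probability of Default x Loss Given Default x Exposure at Default
    return pd_ * lgd * ead

# Portfolio as (PD, LGD, EAD) tuples -- assumed example exposures, not real data
portfolio = [(0.02, 0.45, 1_000_000), (0.05, 0.60, 250_000)]
total_el = sum(expected_loss(*exposure) for exposure in portfolio)
print(f"Portfolio expected loss: {total_el:,.2f}")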
- Cocoa Production: West Africa - 2022 | Akweidata
Work in progress.
- Smartphone App for University Students | Akweidata
An all-purpose app for Ashesi students. Project from 2017.

Life at Ashesi University, like any university, can be overwhelming and disorganized. To streamline this experience, I suggest the development of a versatile mobile app that centralizes various essential services, thereby aiding students' time management. The university offers a range of services including student support, counseling and tutoring; however, accessing these services often proves cumbersome and time-consuming. In addition, many students are unaware of the contact details for on-campus emergency services and national emergency numbers in Ghana. In critical situations, this lack of information could waste precious time.

To tackle these issues, the proposed app would be a comprehensive solution. It would feature accurate weather forecasts by integrating with the AccuWeather website for Berekuso, a meal-plan balance checker linked to the Ashesi meal-plan webpage, and a digital menu for campus eateries such as Akornor and Big Ben. Additionally, the app would include a directory of contact details for Ashesi's various services and for emergency services, with the added convenience of calling these contacts directly from the app. This integration would ensure that all necessary information and services are readily accessible to students, enhancing their university experience and safety.

Pseudocode
1. When the app is started, the homepage is displayed.
2. The homepage displays the titles "Meal Plan," "Weather," "Ashesi Services," "Food" and "Emergency Services."
3. If "Meal Plan" is selected, the Ashesi meal-plan webpage is displayed.
4. If "Weather" is selected, the AccuWeather webpage (set to Berekuso) is displayed.
5. If "Food" is selected, restaurants at Ashesi are displayed.
6. Select any restaurant and its menu is displayed.
7. If "Ashesi Services" is selected, a list of Ashesi services is displayed.
8. Select any service and its contact details are displayed for calling.
9. If "Emergency Services" is selected, a list of emergency services is displayed.
10. Select any emergency service and its contact details are displayed for calling.
A console sketch of this menu dispatch is given at the end of this entry.

Figure 1: Flowchart

* Due to the sensitivity of some information within the app, kindly request access. Once access is granted, the links below will be temporarily activated.
Download APK via GitHub: https://github.com/akweix/Ash-App
Download Android app via Thunkable: https://x.thunkable.com/copy/b63301e1a6082169dd0d9aa036ac119d
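As referenced above, here is a minimal console sketch of the pseudocode's menu dispatch, written in Python for illustration; the actual app was built with Thunkable, and the menu entries and actions below are placeholders, not the app's real data:

# Console sketch of the homepage dispatch described in the pseudocode.
MENU = {
    "Meal Plan": "open the Ashesi meal-plan webpage",
    "Weather": "open the AccuWeather (Berekuso) webpage",
    "Food": "list restaurants, then show the selected menu",
    "Ashesi Services": "list services, then show contact details for calling",
    "Emergency Services": "list emergency contacts for calling",
}

def homepage():
    # Display the homepage titles and dispatch on the user's selection
    options = list(MENU)
    for i, title in enumerate(options, 1):
        print(f"{i}. {title}")
    choice = int(input("Select an option: ")) - 1
    print(f"Action: {MENU[options[choice]]}")

if __name__ == "__main__":
    homepage()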
- Scraping Data using Python | Akweidata
A Python application designed to generate a histogram depicting the frequency of articles published on Google News in 2022 concerning '@celebjets'.

Background
Created by the then-teenager Jack Sweeney in 2020, @celebjets (now suspended) was a Twitter account that tracked the locations of celebrities' jets. The account gained worldwide notoriety through 2021 and 2022, mainly due to Sweeney's altercation with Elon Musk over the privacy and safety concerns of tracking Musk's jet. More importantly, the account's posts sparked conversations about the vanity-filled lifestyles of celebrities and the significant CO2 footprints they leave through their heavy use of private jets.

Problem Formulation, Decomposition and Abstraction
With the given prompt at hand, we need to understand the problem space exhaustively in order to move efficiently and effectively from the undesired to the desired state of affairs. The problem is not monolithic, so we need to break it down before conceptualizing any solutions. Breaking down the problem requires the Computational Thinking concept known as Decomposition. By separating the problem into sub-problems, the task becomes more approachable: one can quickly see how existing conceptual frameworks (in the form of existing Python commands) can be employed and knitted together to solve the problem.

However, before diving into decomposition, we need to recognize that the prompt does not encapsulate the entire problem space. Key components of the problem space concern the nature of the file news-celebjets.txt: How is the data organized? Where are the dates stated? What is the format of the dates? How can we work with this format? I therefore ran the HTML code and manually viewed a sample (the first ten) of the articles to get a brief idea of the nature of the data. My findings were as follows:
- The data is not primary but secondary data: some analysis has already been made.
- The data is very well structured: article cover picture; logo and name of publisher; title of article (hyperlinked); the date.
- The list of articles appears to consistently follow the structure stated above.
- The dates appear to share the same format throughout: month and day, e.g. "Dec 14".

Intuitively, in making these findings I had already applied the Computational Thinking concept of Patterns and Generalizations. Having identified the repeated structure of the article list, I wondered: can loops or some other iterative command assist me in extracting the dates? With these insights at hand, I had a greater understanding of the problem space. Consequently, I employed the General to Specific decomposition technique, which involves breaking down a problem from a general perspective and then adding specific, more detailed components. Because the given problem is not open-ended and has specific requirements, I found this technique the most appropriate. The results of my decomposition are as follows:

General Problem: Analyze news articles and create a histogram representing the number of articles published per week.

Listed below with the letters a, b, c, d and e are the definitions of the desired characteristics of the solution, i.e. the subproblems. To address these characteristics/subproblems, we need to get specific; hence, below each subproblem, listed in Roman numerals, are the specifications written in pseudocode. Note that, with the exception of subproblems "a" and "c", I relied heavily on ChatGPT to write out the specifics for the other subproblems, as I had no experience with the commands required.
a. Read the scraped data from the text file (news-celebjets.txt).
b. Find the publishing dates of the news articles.
c. Sort the publishing dates.
e. Plot a histogram representing the number of articles per week.

Figure 1: Abstraction

Algorithmic Solution and the Agile Process
An algorithm is a well-defined sequence of instructions that takes one or more input values and produces output values. Per the abstraction above, we have an idea of the desired solution's input, output and sequence of instructions. In the attached main.py file, I have generated a solution, pictured below in Figure 2. Following the decomposition phase's specifics, I have commented extensively throughout the code on my reasoning and methods, which I shall not repeat here. Instead, in this section, I comment on the role of ChatGPT in my Agile solution-creation process. A minimal sketch of the core extraction-and-plotting step is given after Figure 2.

Figure 2: My Final Histogram
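As referenced above, here is a minimal sketch of the date-extraction and weekly-histogram step, assuming the "Dec 14"-style dates described in the findings; the regex and plotting choices are illustrative, not the actual code in main.py:

import re
from datetime import datetime
import matplotlib.pyplot as plt

# Read the scraped text and pull out 'Mon DD' dates (e.g. 'Dec 14')
with open("news-celebjets.txt", encoding="utf-8") as f:
    text = f.read()
raw_dates = re.findall(r"(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) \d{1,2}", text)

# Parse and sort the dates, assuming all articles were published in 2022
dates = sorted(datetime.strptime(f"{d} 2022", "%b %d %Y") for d in raw_dates)

# Bucket by ISO week number and plot the weekly article counts
weeks = [d.isocalendar()[1] for d in dates]
plt.hist(weeks, bins=range(1, 54))
plt.xlabel("ISO week of 2022")
plt.ylabel("Number of articles")
plt.title("Google News articles mentioning @celebjets, per week")
plt.show()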
- Photography Tool: Black & White Conversion | Akweidata
A basic photo editor to convert PNG pictures from color to black and white.
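A minimal sketch of such a conversion in Python using Pillow (file names are placeholders):

from PIL import Image

# Convert a color PNG to black and white ("L" = 8-bit greyscale mode in Pillow)
img = Image.open("input.png")
img.convert("L").save("output_bw.png")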
- Scraping Oil-related Articles | Akweidata
Run in Python via Google Colab.

# Install and set up necessary packages and dependencies
!pip install selenium
!apt-get update
!apt install chromium-chromedriver

import sys
sys.path.insert(0, '/usr/lib/chromium-browser/chromedriver')

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import pandas as pd

# Set up Chrome options for Selenium
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')

# Initialize the Chrome WebDriver with the specified options
driver = webdriver.Chrome(options=chrome_options)

# Fetch the web page
url = 'https://news.google.com/search?q=oil%20prices'
driver.get(url)

# Get the page source and close the browser
html = driver.page_source
driver.quit()

# Parse the web page using BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
articles = soup.find_all('article')

# Extract the necessary information
news_data = []
base_url = 'https://news.google.com'
for article in articles:
    # Extract the title and link
    title_link_element = article.find('a', class_='JtKRv', href=True)
    title = title_link_element.text.strip() if title_link_element else "No Title"
    link = base_url + title_link_element['href'][1:] if title_link_element else "No Link"

    # Extract the date, preferring the machine-readable datetime attribute
    time_element = article.find('time')
    if time_element and 'datetime' in time_element.attrs:
        date = time_element['datetime']
    elif time_element:
        date = time_element.text.strip()
    else:
        date = "No Date"

    news_data.append([title, link, date])

# Store the data in a DataFrame and save it to CSV
df = pd.DataFrame(news_data, columns=['Title', 'Link', 'Date'])
csv_file = 'google_news_oil_prices.csv'
df.to_csv(csv_file, index=False)

# Download the file to your computer (only works in Google Colab)
try:
    from google.colab import files
    files.download(csv_file)
except ImportError:
    print("The files module is not available. This code is not running in Google Colab.")

Future Projects:
- Relation of the frequency of oil-related posts to sustainability risks
- Relation of the frequency of oil-related posts to stock prices (general & oil-producing/intensive firms)

Updated Code

# Install and set up necessary packages and dependencies
!pip install selenium
!apt-get update
!apt install chromium-chromedriver

import sys
sys.path.insert(0, '/usr/lib/chromium-browser/chromedriver')

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import pandas as pd
import time
from datetime import datetime, timedelta
import re

# Function to convert various date formats to a standardized format
def convert_relative_date(text):
    current_datetime = datetime.now()
    current_year = current_datetime.year
    # Check 'yesterday' first, since it contains the substring 'day' and would
    # otherwise be swallowed by the 'day'/'days' branch below
    if 'yesterday' in text.lower():
        return (current_datetime - timedelta(days=1)).strftime('%Y-%m-%d')
    elif 'hour' in text or 'hours' in text:
        return current_datetime.strftime('%Y-%m-%d')
    elif 'minute' in text or 'minutes' in text:
        return current_datetime.strftime('%Y-%m-%d')
    elif 'day' in text or 'days' in text:
        match = re.search(r'\d+', text)
        days_ago = int(match.group()) if match else 0
        return (current_datetime - timedelta(days=days_ago)).strftime('%Y-%m-%d')
    else:
        try:
            parsed_date = datetime.strptime(text, '%b %d')
            return datetime(current_year, parsed_date.month, parsed_date.day).strftime('%Y-%m-%d')
        except ValueError:
            return text  # Return the original text if parsing fails

# Set up Chrome options for Selenium
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')

# Initialize the Chrome WebDriver with the specified options
driver = webdriver.Chrome(options=chrome_options)

# Fetch the web page
url = 'https://news.google.com/search?q=oil%20prices'
driver.get(url)

# Scroll the page to load more articles
for _ in range(5):  # Adjust the range for more or fewer scrolls
    driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.END)
    time.sleep(2)  # Wait for the page to load

# Get the page source and close the browser
html = driver.page_source
driver.quit()

# Parse the web page using BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
articles = soup.find_all('article')

# Extract the necessary information
news_data = []
base_url = 'https://news.google.com'
for article in articles:
    title_link_element = article.find('a', class_='JtKRv', href=True)
    title = title_link_element.text.strip() if title_link_element else "No Title"
    link = base_url + title_link_element['href'][1:] if title_link_element else "No Link"
    time_element = article.find('time')
    date = time_element.text.strip() if time_element else "No Date"
    news_data.append([title, link, date])

# Store the data in a DataFrame
df = pd.DataFrame(news_data, columns=['Title', 'Link', 'Date'])

# Convert dates to a standardized format
for i, row in df.iterrows():
    df.at[i, 'Date'] = convert_relative_date(row['Date'])

# Save the DataFrame to CSV
csv_file = 'google_news_oil_prices.csv'
df.to_csv(csv_file, index=False)

# Download the file to your computer (only works in Google Colab)
try:
    from google.colab import files
    files.download(csv_file)
except ImportError:
    print("The files module is not available. This code is not running in Google Colab.")
- Cocoa Production: Ghana and Ivory Coast - 2022 | Akweidata
Summary of cocoa production in Ghana and Ivory Coast in 2022.