Python: Pass environment variables as a configuration file

Today, I got a task to pass environment variables through a configuration file. This makes it easy to supply different parameters based on the running environment (such as development or production). I used the "python-dotenv" library for this task. The snippet below shows how you can do it in your program.

import os
from dotenv import load_dotenv  # pip install python-dotenv

env_path = 'path to file'
load_dotenv(dotenv_path=env_path)
# now you can load the environment variables
greeting = os.getenv('greeting', '')
name = os.getenv('name', 'default')

The configuration file looks like this:
greeting=hello
name=world!
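Under the hood, load_dotenv essentially reads key=value lines from the file and exports them into os.environ. The following is a rough, stdlib-only sketch of that behavior for illustration (the file name here is made up, and the keys are just the example above; use the real library in practice):

```python
import os

def load_env_file(path):
    """Rough illustration of what python-dotenv's load_dotenv does."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            # like load_dotenv's default, do not override existing variables
            os.environ.setdefault(key.strip(), value.strip())

# demo: write the sample file from above and load it
with open("example.env", "w") as f:
    f.write("greeting=hello\nname=world!\n")

load_env_file("example.env")
print(os.getenv("greeting", ""), os.getenv("name", "default"))
```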

Thank you for reading.

Python: Convert a Cron Expression to the Total Number of Runs/Executions in One Day

Recently, I was working on a project and needed to convert a cron expression to the total number of executions in one day. There are some libraries in Python that help you convert cron expressions into human-readable text (https://pypi.org/project/cron-descriptor/). However, in my case, I needed the total number of executions.

I found a library (https://pypi.org/project/croniter/) that can give you the exact execution times. I used it to extract the total number of executions in a day.

import croniter
import datetime

def getTotalNumberOfExecution(cron_str):
    try:
        cur_date = datetime.datetime.now()
        # start from midnight of the current day to count all of today's executions
        now = cur_date.replace(hour=0, minute=0, second=0, microsecond=0)
        # step back one second so a job scheduled exactly at midnight is also counted
        cron = croniter.croniter(cron_str, now - datetime.timedelta(seconds=1))
        count = 0
        nextdate = cron.get_next(datetime.datetime)
        while nextdate.date() == now.date():  # loop until the iterator crosses into the next day
            count += 1
            nextdate = cron.get_next(datetime.datetime)
        return count
    except Exception:
        if cron_str == "@once":  # here you can handle cron strings starting with @
            return 1
        return 0

# Main #
cron_str = "0 * * * *"
print(getTotalNumberOfExecution(cron_str))

I hope the above code helps you complete this task. The code is self-explanatory, but if you have any questions you can ask me in the comments.
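
As a library-free cross-check of the count, you can brute-force every minute of a day. This toy matcher only understands `*` and plain numbers in the five fields (no ranges, steps, or lists), so it is illustrative only, not a substitute for croniter:

```python
import datetime

def matches(field, value):
    # toy matcher: supports only "*" and plain numeric values
    return field == "*" or int(field) == value

def runs_per_day(cron_str, day=datetime.date(2021, 1, 1)):
    minute_f, hour_f, dom_f, month_f, dow_f = cron_str.split()
    t = datetime.datetime.combine(day, datetime.time(0, 0))
    count = 0
    for _ in range(24 * 60):  # test every minute of the day
        if (matches(minute_f, t.minute) and matches(hour_f, t.hour)
                and matches(dom_f, t.day) and matches(month_f, t.month)
                and matches(dow_f, t.isoweekday() % 7)):  # cron convention: Sunday = 0
            count += 1
        t += datetime.timedelta(minutes=1)
    return count

print(runs_per_day("0 * * * *"))   # hourly: 24 runs per day
print(runs_per_day("30 2 * * *"))  # once at 02:30: 1 run per day
```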

Singleton Design Pattern in Python

Recently, I needed to create a singleton class in Python. I am putting the code here for your reference.

class Singleton:
    # Here will be the instance stored.
    __instance = None
    
    @classmethod
    def get_instance(cls) -> 'Singleton':
        if cls.__instance is None:
            cls.__instance = cls()
        return cls.__instance

    @classmethod
    def clear_instance(cls) -> None:
        cls.__instance = None

    def anyofyourfunction(self):
        return None
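
To check the behavior, here is a short usage demo (it redefines a trimmed copy of the class so the snippet runs on its own):

```python
class Singleton:
    __instance = None  # holds the single stored instance

    @classmethod
    def get_instance(cls) -> "Singleton":
        if cls.__instance is None:
            cls.__instance = cls()
        return cls.__instance

    @classmethod
    def clear_instance(cls) -> None:
        cls.__instance = None

a = Singleton.get_instance()
b = Singleton.get_instance()
print(a is b)   # both calls return the same object

Singleton.clear_instance()
c = Singleton.get_instance()
print(a is c)   # a fresh instance after clearing
```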

Query BigQuery with Python

This week, I wanted to connect to BigQuery using Python. BigQuery is a Google service that helps store and process large volumes of data quickly. We use it to store Google billing data and wanted to generate some reports by executing different SQL queries.

from google.cloud import bigquery
from google.oauth2 import service_account

auth_json = ...  # your JSON for authentication, loaded as a dict

credentials = service_account.Credentials.from_service_account_info(auth_json)

client = bigquery.Client(project="project name", credentials=credentials)

query_job = client.query("""your query inside""")

results = query_job.result()  # Waits for job to complete.

for row in results:
    print(row.id)  # access any other columns as attributes in the same way

The above code shows how we can connect to and query BigQuery. The query returns an iterable, which allows looping over the rows and accessing each column as shown in the code.

I hope it helps you in your big projects.

Import data to Elasticsearch from CSV

In a recent project, I needed to import data from CSV to Elasticsearch. I found a post on Stack Overflow (https://stackoverflow.com/questions/41573616/index-csv-to-elasticsearch-in-python) for this task. It shows a way to bulk insert into Elasticsearch. However, in my case, I was getting a Unicode error when I tried bulk insertion.

I solved this problem by inserting the documents individually. There were not many records, and the speed was also not that bad. I used the following code to accomplish my task.

from elasticsearch import Elasticsearch
import csv

es = Elasticsearch(["localhost"])

with open('data_v4_cats.csv', 'r') as f:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader):
        # index each row individually to avoid the Unicode error hit with bulk insertion
        res1 = es.index(index='categories', doc_type='category', id=i, body=row)
        print(i, res1)
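
The csv.DictReader call is what turns each CSV row into the dict passed as the document body. A stdlib-only illustration with a made-up two-column sample (the real file and column names are whatever your CSV contains):

```python
import csv
import io

sample = "name,color\nfelix,black\nwhiskers,white\n"  # stands in for data_v4_cats.csv
reader = csv.DictReader(io.StringIO(sample))
docs = list(reader)
print(docs[0])  # each row becomes a dict keyed by the header row
```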