App Engine Series #4: The Controllers
Important: As of July 2015, this tutorial no longer works, as App Engine has shut down the Master/Slave Datastore that the application uses. We are keeping it online for reference purposes and you can still download the code, but it needs to be converted to the newer High Replication Datastore to work.
This is the fourth part of our App Engine series, where we are building an uptime dashboard web application using Google's powerful App Engine platform and tools. Read part three, where we started working with the webapp framework.
Picking up where we left off last time, we will take a look at the controllers that comprise our App Engine application. The first stop is main.py, our main routing file and entry point for the app.
Main.py
This file associates URLs with controller classes. You can think of it as our "index.php".
#!/usr/bin/env python

# Importing the controllers that will handle
# the generation of the pages:
from controllers import crons, ajax, generate, mainh

# Importing some of Google's AppEngine modules:
from google.appengine.ext import webapp
from google.appengine.ext.webapp import util

# This is the main method that maps the URLs
# of your application with controller classes.
# If a URL is requested that is not listed here,
# a 404 error is displayed.

def main():
    application = webapp.WSGIApplication([
        ('/', mainh.MainHandler),
        ('/crons/5min/', crons.FiveMinHandler),
        ('/crons/1day/', crons.OncePerDayHandler),
        ('/ajax/24hours/', ajax.TwentyFourHours),
        ('/ajax/7days/', ajax.SevenDays),
        ('/ajax/30days/', ajax.ThirtyDays),
        ('/generate-test-data/', generate.GenerateTestData)
    ], debug=True)

    util.run_wsgi_app(application)

if __name__ == '__main__':
    main()
Each of our controller Python source files is imported as a module. The URL/handler pairs are represented as a list of tuples, passed to the WSGIApplication() constructor of the webapp framework we discussed last time. After this, we only need to pass the returned application object to util.run_wsgi_app(), which sets up our uptime dashboard app.
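To make the dispatch step less abstract, here is a rough, self-contained sketch of what WSGIApplication does with that list (handler names are plain strings here for illustration; the real framework instantiates the handler classes and each URL pattern is a regular expression matched against the full request path):

```python
import re

# A hypothetical route table mirroring a few of main.py's URL/handler pairs:
routes = [
    ('/', 'MainHandler'),
    ('/crons/5min/', 'FiveMinHandler'),
    ('/ajax/24hours/', 'TwentyFourHours'),
]

def dispatch(path):
    # Try each pattern in order; the first full match wins.
    for pattern, handler in routes:
        if re.match(pattern + '$', path):
            return handler
    return None  # webapp would respond with a 404 here
```

Because the patterns are regexes, you can also capture URL segments with groups and have them passed to the handler's get() method as arguments.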
In the rest of this tutorial, we are going to cover these controller classes.
Mainh.py
This is the handler for the main page. Its main task is to render the template we created last time.
#!/usr/bin/env python

import os
from datetime import datetime
from config import config

from google.appengine.ext import webapp
from google.appengine.ext.webapp import template

# This controller handles the
# generation of the front page.

class MainHandler(webapp.RequestHandler):
    def get(self):

        # We are using the template module to output the page.
        path = os.path.join(os.path.dirname(__file__), '../views', 'index.html')

        self.response.out.write(
            # The render method takes the path to a html template,
            # and a dictionary of key/value pairs that will be
            # embedded in the page.
            template.render(path, {
                "title": config.scriptTitle,
                "year": datetime.now().strftime("%Y"),
                "domain": config.fetchURL.replace('http://', '').replace('/', '')
            }))
The render method takes two arguments - the path to the template, and a dictionary of key/value pairs. These are inserted into the template at the places specified by template tags, and the return value of this method is written as HTML output to the browser.
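For reference, the matching tags in views/index.html would look something like this (a hypothetical fragment - webapp's template module uses Django's template syntax, where {{ name }} is replaced with the value passed under that key):

```html
<title>{{ title }}</title>
...
<p>Monitoring {{ domain }} &mdash; &copy; {{ year }}</p>
```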
Ajax.py
This file handles the AJAX requests issued by jQuery. Instead of HTML, we need to generate JSON, which is as easy as calling the dumps method of the simplejson module.
#!/usr/bin/env python

# Including the models:
from models.models import *

# We are using django's simplejson
# module to format JSON strings:
from django.utils import simplejson

from datetime import datetime, timedelta

from google.appengine.ext import webapp, db
from google.appengine.api import memcache

# The AJAX controllers:

class TwentyFourHours(webapp.RequestHandler):
    # This class selects the response times
    # for the last 24 hours.
    def get(self):

        # Checking whether the result of this
        # function is already cached in memcache:
        jsonStr = memcache.get("TwentyFourHoursCache")

        # If it is not, we need to generate it:
        if jsonStr is None:

            query = db.GqlQuery("SELECT * FROM Ping WHERE date>:dt ORDER BY date",
                                dt=(datetime.now() - timedelta(hours=24)))

            results = query.fetch(limit=300)

            chart = []

            for Ping in results:
                chart.append({
                    "label": Ping.date.strftime("%H:%M"),
                    "value": Ping.responseTime
                })

            jsonStr = simplejson.dumps({
                "chart": {
                    # tooltip is used by the jQuery chart:
                    "tooltip": "Response time at %1: %2ms",
                    "data": chart
                },
                "downtime": getDowntime(1)
            })

            # Caching it for five minutes:
            memcache.add("TwentyFourHoursCache", jsonStr, 300)

        self.response.out.write(jsonStr)

class SevenDays(webapp.RequestHandler):
    days = 7

    def get(self):

        # Selecting the response times for the last seven days:
        query = db.GqlQuery("SELECT * FROM Day WHERE date>:dt ORDER BY date",
                            dt=(datetime.now() - timedelta(days=self.days)))

        results = query.fetch(limit=self.days)

        chart = []

        for Day in results:
            chart.append({
                "label": Day.date.strftime("%b, %d"),
                "value": Day.averageResponseTime
            })

        self.response.out.write(simplejson.dumps({
            "chart": {
                "tooltip": "Average response time for %1: %2ms",
                "data": chart
            },
            "downtime": getDowntime(self.days)
        }))

# Extending the SevenDays class and only
# increasing the days member:

class ThirtyDays(SevenDays):
    days = 30

def getDowntime(days=1):

    # Checking whether the result of this function
    # already exists in memcache. Notice the key for get():
    downTimeList = memcache.get("DownTimeCache" + str(days))

    if downTimeList is None:

        query = db.GqlQuery("SELECT * FROM DownTime WHERE date>:dt ORDER BY date",
                            dt=(datetime.now() - timedelta(days=days)))

        results = query.fetch(limit=100)

        downTimeList = []
        downTimePeriod = {}

        if len(results) == 0:
            return []

        # This loop "compacts" the downtime:
        for DownTime in results:

            if not downTimePeriod.has_key("begin"):
                downTimePeriod = {"begin": DownTime.date, "end": DownTime.date}
                continue

            if DownTime.date - downTimePeriod['end'] < timedelta(minutes=8):
                downTimePeriod['end'] = DownTime.date
            else:
                downTimeList.append(downTimePeriod)
                downTimePeriod = {"begin": DownTime.date, "end": DownTime.date}

        downTimeList.append(downTimePeriod)

        # Formatting the output of this function:
        for i in downTimeList:

            if i['end'] + timedelta(minutes=5) > datetime.now():
                i['period'] = timedeltaFormat((datetime.now() - i['begin']).seconds)
                i['end'] = "NOW"
            else:
                i['period'] = timedeltaFormat(((i['end'] - i['begin']) + timedelta(minutes=5)).seconds)
                i['end'] = (i['end'] + timedelta(minutes=5)).strftime('%H:%M on %b, %d, %Y')

            i['begin'] = i['begin'].strftime('%H:%M on %b, %d, %Y')

        # Storing the response in memcache:
        memcache.add("DownTimeCache" + str(days), downTimeList, 300)

    return downTimeList

# A helper function for formatting time periods.

def timedeltaFormat(seconds):
    hours, remainder = divmod(seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return '%02d:%02d:%02d' % (hours, minutes, seconds)
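To make the structure concrete, here is a minimal, hand-built example of the JSON these handlers emit. The values are made up for illustration, and the standard json module stands in for django.utils.simplejson, which has the same dumps/loads interface:

```python
import json

# A tiny payload in the same shape the AJAX handlers produce:
payload = {
    "chart": {
        "tooltip": "Response time at %1: %2ms",
        "data": [
            {"label": "14:05", "value": 230},
            {"label": "14:10", "value": 245},
        ],
    },
    "downtime": [],
}

jsonStr = json.dumps(payload)

# Round-tripping shows jQuery receives the same structure back:
assert json.loads(jsonStr)["chart"]["data"][0]["value"] == 230
```

On the client side, jQuery walks chart.data to draw the bars and uses the tooltip string as a template, with %1 and %2 replaced by each point's label and value.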
As we are working with large sets of data (we select every ping in the last 24 hour period and transform it to JSON) it is a good idea to implement some sort of caching. This is the perfect usage scenario for memcache, App Engine's fast caching layer.
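The caching pattern itself is simple and worth isolating. Here is a self-contained sketch using a plain dictionary in place of App Engine's memcache (the real handlers call memcache.get() and memcache.add() with a 300-second expiry; expensive_query is a placeholder for the datastore query and JSON encoding):

```python
cache = {}  # stand-in for memcache

def expensive_query():
    # Placeholder for fetching pings and building the JSON string:
    return '{"chart": {"data": []}}'

def cached_response(key):
    # 1. Try the cache first:
    value = cache.get(key)
    if value is None:
        # 2. Cache miss - do the real work...
        value = expensive_query()
        # 3. ...and store the result for subsequent requests.
        #    (memcache.add additionally takes an expiry, e.g. 300 seconds.)
        cache[key] = value
    return value
```

With a five-minute expiry, at most one request every five minutes pays the cost of the datastore query; everything else is served straight from memory.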
It is worth noting that all our date/time calculations are done using Python's timedelta and datetime objects, which make working with time periods as natural as possible. We are also selecting results from App Engine's datastore with the platform's db module.
We are using two helper functions in the controller classes - timedeltaFormat() and getDowntime(). The first one formats a given number of seconds into a proper hours:minutes:seconds format, which is useful for displaying the downtime duration.
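For example, running the helper (copied here from ajax.py) on a downtime of 3,725 seconds:

```python
def timedeltaFormat(seconds):
    # Split into whole hours, then minutes and leftover seconds:
    hours, remainder = divmod(seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return '%02d:%02d:%02d' % (hours, minutes, seconds)

print(timedeltaFormat(3725))  # 1 hour, 2 minutes, 5 seconds -> "01:02:05"
```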
The second function actually does quite a bit of work. It selects the downtime objects for the given period and "compacts" them. This means that if the application detects downtime on several consecutive attempts (one every 5 minutes), they are combined into a single, longer downtime period.
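The compacting step can be illustrated in isolation. This sketch, with made-up timestamps, merges datetimes that are less than 8 minutes apart into begin/end periods, mirroring the loop in getDowntime():

```python
from datetime import datetime, timedelta

def compact(dates, gap=timedelta(minutes=8)):
    periods = []
    current = None
    for d in dates:
        if current is None:
            current = {"begin": d, "end": d}
        elif d - current["end"] < gap:
            # Consecutive failed pings: extend the current outage.
            current["end"] = d
        else:
            # The gap is too large: close this outage, start a new one.
            periods.append(current)
            current = {"begin": d, "end": d}
    if current is not None:
        periods.append(current)
    return periods

# Hypothetical failed-ping timestamps:
pings = [
    datetime(2012, 1, 1, 10, 0),
    datetime(2012, 1, 1, 10, 5),   # 5 minutes later: same outage
    datetime(2012, 1, 1, 10, 10),  # still the same outage
    datetime(2012, 1, 1, 12, 0),   # hours later: a new outage
]
```

Here compact(pings) yields two periods: one from 10:00 to 10:10, and a second starting at 12:00.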
Crons.py
This file handles cron requests and is only accessible by the cron service. Access by regular users is denied in the app.yaml configuration file, while the configuration of the cron service itself resides in cron.yaml.
This file is accessed by two cron events - the first one is executed every five minutes, and the second only once per day.
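The matching cron.yaml configuration would look roughly like this (a sketch: the description lines are placeholders, but the URL paths match the routes in main.py and the schedule syntax is App Engine's standard format):

```yaml
cron:
- description: ping the monitored site and record the response time
  url: /crons/5min/
  schedule: every 5 minutes
- description: aggregate the last day's pings into a Day entry
  url: /crons/1day/
  schedule: every 24 hours
```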
#!/usr/bin/env python

import time
from datetime import datetime, timedelta

from google.appengine.api import urlfetch
from google.appengine.ext import webapp, db

from models.models import *
from config import config

# The cron controllers:

class FiveMinHandler(webapp.RequestHandler):
    # Executed every five minutes, and fetches
    # Tutorialzine's homepage, while recording
    # the response time.
    def get(self):

        start = time.time()

        try:
            # Using appengine's URLFetch module:
            result = urlfetch.fetch(
                config.fetchURL,
                deadline=10,
                headers={'Cache-Control': 'max-age=0'}
            )

            if result.status_code == 200 and result.content.find(config.searchString) != -1:
                # Saving the Ping to the datastore with the put() method.
                Ping(responseTime=int((time.time() - start) * 1000)).put()
                self.response.out.write("OK!")
            else:
                raise Exception('This website is offline.')

        except Exception, es:
            # If something went wrong, record a DownTime object:
            DownTime().put()
            self.response.out.write(es)

class OncePerDayHandler(webapp.RequestHandler):
    # The get method is executed once per day,
    # and it creates a new Day entry from the last
    # 24 hours worth of pings.
    def get(self):

        query = db.GqlQuery("SELECT * FROM Ping WHERE date>:dt",
                            dt=(datetime.now() - timedelta(hours=24)))

        allPings = query.fetch(limit=300)

        totResponseTime = 0
        avgResponseTime = 0

        for ping in allPings:
            totResponseTime += ping.responseTime

        if len(allPings) > 0:
            avgResponseTime = totResponseTime / len(allPings)

        query = Day.gql("WHERE date=:dt", dt=datetime.now().date())

        if len(query.fetch(limit=1)) == 0:
            Day(averageResponseTime=avgResponseTime, totalPings=len(allPings)).put()
            self.response.out.write("Done!")
        else:
            self.response.out.write("This day already exists in the datastore!")
This is where the Ping, DownTime and Day objects are created, following the models outlined in models.py that we discussed last time.
In the Five Minute handler, we use App Engine's URL fetch service to retrieve Tutorialzine's homepage. Depending on whether our search string was detected (or whether an exception occurred), we create and record either a Ping object or a DownTime one. As you saw in ajax.py, we are using these to output the uptime statistics.
In the Once Per Day handler we average the response times of the last day's pings and create a new Day object, promptly recorded to the datastore afterwards.
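The averaging itself is plain integer arithmetic. Note that under Python 2, / on two integers is integer division, so the stored average is truncated to whole milliseconds; the example below uses // to make that explicit (the response-time values are made up):

```python
pings = [120, 250, 98, 310]  # hypothetical response times in ms

totResponseTime = sum(pings)
avgResponseTime = 0

if len(pings) > 0:
    # Floor division, matching Python 2's integer / behaviour:
    avgResponseTime = totResponseTime // len(pings)

print(avgResponseTime)  # (120 + 250 + 98 + 310) // 4 = 194
```

The guard against an empty list matters: on a day with no recorded pings, dividing by len(allPings) would raise a ZeroDivisionError.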
Continue to part 5, where we are creating the front end.
Thanks for part 4. This series is great.
A possible reason for this series' low response is that it involves Python.
Python is awesome... please keep this series going :)
thanks for sharing
This series is awesome, but I have one correction for you:
I believe this: os.path.join(os.path.dirname(__file__), '../views', 'index.html')
on line 18 of Mainh.py should be os.path.join(os.path.dirname(__file__), '..', 'views', 'index.html')
Please finish it! I learned a lot from this
Do you have a Python 2.7 version of this? I tried changing your code to 2.7, but it doesn't work :(