Building a Dataset from Twitter Using Tweepy

I am always on the lookout for interesting datasets to mess about with for machine learning and data visualization. Mostly I use sources like data.gov.ie, which has lots of interesting datasets specific to Ireland. Sometimes, though, there isn’t a dataset readily available for the topic I am interested in, and I want to create one; for that I usually turn to Twitter. One obvious drawback is that the data will be unlabeled, so if you are looking to use it in supervised machine learning you will need to label it yourself, which can be both laborious and time consuming. Tweepy is a great Python library for accessing the Twitter API, and it is very easy to use. In this post I will demonstrate how to use it to grab tweets from Twitter, and how to add some other features to the dataset that might be useful for machine learning models later.

I will demonstrate how to do this in a Jupyter notebook; in reality, you would probably want to write the dataset to a CSV file or some other format for later consumption in model training.

The first thing you will need to do is create a new application on the Twitter developer portal. This will give you the access keys and tokens you need to access the Twitter API. Standard access is free, but there are a number of limits, which can be seen in the documentation, that you should be aware of. Once you have done this, create a new Jupyter notebook, import Tweepy, and create some variables to hold your access keys and tokens.
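A minimal sketch of that setup might look something like this (the variable names and placeholder values are illustrative; substitute your own keys and tokens):

    import tweepy

    # Placeholder credentials - replace with the values from your own
    # application on the Twitter developer portal.
    consumer_key = "YOUR_CONSUMER_KEY"
    consumer_secret = "YOUR_CONSUMER_SECRET"
    access_token = "YOUR_ACCESS_TOKEN"
    access_token_secret = "YOUR_ACCESS_TOKEN_SECRET"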

Now we can initialize Tweepy and grab some tweets. In this example, we will get 100 tweets relating to the term ‘trump’. Print out the raw tweets as well, so you can verify that your access keys work and you are actually receiving tweets.
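A sketch of what that might look like, assuming Tweepy 3.x (in Tweepy 4.x the search call was renamed to api.search_tweets):

    import tweepy

    # Authenticate using the keys and tokens defined earlier.
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)

    # Grab 100 tweets matching the search term. tweet_mode="extended" returns
    # the full, untruncated tweet text in the full_text attribute.
    tweets = api.search(q="trump", count=100, lang="en", tweet_mode="extended")

    # Print the raw JSON so we can confirm the credentials are working.
    for tweet in tweets:
        print(tweet._json)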

Now that you have gotten this far, we can parse the tweet data and create a pandas dataframe to store the relevant attributes that we want. The data will come back from Twitter in JSON format, and depending on what you are looking for, you won’t necessarily want all of it. Below I am doing a number of things:

  • Creating a new pandas dataframe with columns for the items I am interested in from the tweet data.
  • Removing duplicate tweets.
  • Removing any URLs in the tweet text – in my case I was planning on using this data in some text classification experiments, so I don’t want these included.
  • Creating a sentiment measure for the tweet text using the TextBlob library.

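A sketch of how those steps might look (the exact columns are illustrative, and the tweets are assumed to have been fetched with tweet_mode="extended" as above):

    import re
    import pandas as pd
    from textblob import TextBlob

    # Pull out only the attributes we are interested in from each tweet.
    rows = []
    for tweet in tweets:
        rows.append({
            "created_at": tweet.created_at,
            "user": tweet.user.screen_name,
            "text": tweet.full_text,
            "retweet_count": tweet.retweet_count,
            "favorite_count": tweet.favorite_count,
        })
    df = pd.DataFrame(rows)

    # Remove duplicate tweets based on their text.
    df = df.drop_duplicates(subset="text").reset_index(drop=True)

    # Strip any URLs out of the tweet text.
    df["text"] = df["text"].apply(lambda t: re.sub(r"http\S+", "", t).strip())

    # Add a sentiment polarity score (-1.0 negative to 1.0 positive) using TextBlob.
    df["sentiment"] = df["text"].apply(lambda t: TextBlob(t).sentiment.polarity)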

At this point, you have the beginnings of a dataset, and you can easily add more features to it. In my case I wanted to add the tweet text length and the count of punctuation in the tweet text. The code below calculates these and adds two new columns to the dataframe.
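A sketch of how these two features might be added:

    import string

    # Length of the tweet text in characters.
    df["text_length"] = df["text"].apply(len)

    # Number of punctuation characters in the tweet text.
    df["punctuation_count"] = df["text"].apply(
        lambda t: sum(1 for ch in t if ch in string.punctuation)
    )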

This post hopefully illustrated how easy it is to create datasets from Twitter. The full Jupyter notebook is available on my GitHub here, which also has an example of generating a wordcloud from the data.

Distributed Systems Observability

This post was also featured in Issue #103 of the Distributed Systems Newsletter.

A recent project my team and I worked on involved the re-architecture of a globally distributed system to facilitate a deployment in public cloud. We learnt a lot completing this project, the most important lesson being that it never ends up being a ‘lift and shift’ exercise. Many times we faced a decision: leave something as-is that was not quite as optimal as it should be, or change it during the project and potentially impact agreed timelines. Ultimately, the decision always ended up being to go ahead and make the improvement. I am a big fan of not falling into the trap of ‘never time to do it right, always time to fix it later’.

Something else I learnt a lot about during this project is the importance of being able to observe complex system behaviors, ideally in as close to real time as possible. This is ever more important as the paradigm shifts to containers and serverless. Combine this with a globally distributed system, bring elements like auto-scaling into the mix, and you have a real challenge on your hands in terms of system observability.

So what is observability, and is it the same as monitoring the service? The term, as it applies to distributed systems, seems to mean different things to different people. I really like the definition that Cindy Sridharan uses in the book Distributed Systems Observability (O’Reilly, 2018):

In its most complete sense, observability is a property of a system that has been designed, built, tested, deployed, operated, monitored, maintained, and evolved in acknowledgment of the following facts:

  • No complex system is ever fully healthy.
  • Distributed systems are pathologically unpredictable.
  • It’s impossible to predict the myriad states of partial failure various parts of the system might end up in.
  • Failure needs to be embraced at every phase, from system design to implementation, testing, deployment, and, finally, operation.
  • Ease of debugging is a cornerstone for the maintenance and evolution of robust systems.

No complex system is ever fully healthy.
At first glance, this might look like a bold claim, but it is absolutely true. There will always be a component performing in a sub-optimal fashion, or a component that is currently failed over to a secondary instance. The key thing is that when issues occur, action can be taken, ideally automatically but otherwise manually, to address the issue and ensure the overall system remains stable and within any agreed performance indicators.

Distributed systems are pathologically unpredictable.
Consider a large-scale cloud service with differing traffic profiles each day. Such a system may perform very well with one traffic profile and sub-optimally with another. In this example, again, knowing an issue exists is critical. Some of these types of issues can be difficult to spot if the relevant observability functionality has not been built in. Performance issues in production especially can stay hidden if the right observability tools are not in place and constantly reviewed.

It’s impossible to predict the myriad states of partial failure various parts of the system might end up in.
This is especially true of complex distributed systems; in my opinion it is impossible to test every failure scenario in a very complex system. However, the key failure scenarios that can be identified must be tested, with mitigations put in place as necessary. For anything else, monitoring points should be in place to detect as many issues as possible.

Failure needs to be embraced at every phase, from system design to implementation, testing, deployment, and, finally, operation.
There will always be issues that occur which are not caught in monitoring. Sometimes these are minor with no customer impact, sometimes not. It is important when these issues occur to learn from them, and to make the necessary updates to detect them should they occur again. System monitoring points should be defined early in the project lifecycle, and tested multiple times throughout the development lifecycle.

Ease of debugging is a cornerstone for the maintenance and evolution of robust systems.
Perhaps one of the most critical points here. When problems occur, engineers need the right information to be able to debug effectively. Consider a service crash in production where you don’t get a core dump and the service logs have been rotated to save disk space. When issues occur, you must ensure that the necessary forensics are available to diagnose them.

So, observability is not something that we add in the final stages of a project, but something that must be thought of as a feature of a distributed system from the beginning of the project. It should also be a team concern, not just an operational concern.

Observability must be designed, and the service architecture must facilitate that design. Observability must also be tested, something that can be neglected when the team is heads-down trying to deliver user-visible features with a customer benefit. That is not to suggest that observability has no customer benefit of its own – in fact it is critically important not to be blind in production to issues like higher-than-normal latency that might be impacting customer experience negatively. In a future post, I’ll go more in-depth into the types of observability which I believe should be built in from the start.

AWS SageMaker

I have been playing around with AWS SageMaker a bit more recently. This is Amazon’s managed machine learning service that allows you to build and run machine learning models in the AWS public cloud. The nice thing is that you can productionize a machine learning solution very quickly, because the operational aspects – namely hosting the model and scaling an endpoint to allow inferences against the model – are taken care of for you. So-called ‘MLOps’ has almost become a field of its own, so abstracting all this complexity away and focusing on the core of the problem you are trying to solve is very beneficial. Of course, like everything else in public cloud, this comes at a monetary cost, but it is well worth it if you don’t have specialists in this area, or just want to do a fast proof-of-concept.

I will discuss here the basic flow of creating a model in SageMaker – of course, some of these steps are general things that would be done as part of any machine learning project. The first piece of setup is to head over to AWS and create a new Jupyter notebook instance in SageMaker; this is where the logic for training the model and deploying the ML endpoint will reside.

Assuming you have identified the problem you are trying to solve, you will need to source the dataset you will use for training and evaluating the model. You will want to read the AWS documentation for the algorithm you choose, as it will likely require the data to be in a specific format for the training process. I have found that many of the built-in algorithms in SageMaker require data in different formats, which has been a bit frustrating. I recommend looking at the AWS SageMaker examples repository, as it has detailed examples of all the available algorithms, with walk-throughs that solve real-world problems.

Once you have the dataset gathered and in the correct format, and you have identified the algorithm you want to use, the next step is to kick off a training job. Your data will likely be stored on AWS S3 and, as usual, split into training data and data you will use later for model evaluation. Make sure that the S3 bucket where you store your data is located in the same AWS region as your Jupyter notebook instance, or you may see issues. SageMaker makes it very easy to kick off a training job. Let’s take a look at an example.

Here, I’m setting up a new training job for some experiments I was doing around anomaly detection using the Random Cut Forest (RCF) algorithm provided by AWS SageMaker. This is an unsupervised algorithm for detecting anomalous data points within a dataset.


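A sketch of that setup using the SageMaker Python SDK (v2 parameter names) is below. The bucket, prefix, and training_data array are placeholders, and the hyperparameter values are just examples:

    import sagemaker
    from sagemaker import RandomCutForest

    role = sagemaker.get_execution_role()

    # Hypothetical S3 locations - replace with your own bucket and prefix.
    bucket = "my-sagemaker-bucket"
    prefix = "rcf-anomaly-detection"

    rcf = RandomCutForest(
        role=role,
        instance_count=1,              # number of EC2 instances for training
        instance_type="ml.m5.xlarge",  # EC2 instance type for training
        data_location=f"s3://{bucket}/{prefix}/input",
        output_path=f"s3://{bucket}/{prefix}/output",
        num_samples_per_tree=512,      # RCF-specific hyperparameter
        num_trees=50,                  # RCF-specific hyperparameter
    )

    # training_data is a placeholder numpy array of the prepared dataset
    # (rows = records, columns = features). record_set converts it into the
    # RecordIO-protobuf format the algorithm expects; fit() starts the training job.
    rcf.fit(rcf.record_set(training_data))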

Above we are specifying things like the EC2 instance type we want the training to execute on, the number of EC2 instances, and the input and output locations of our data. The final parameters, where we specify the number of samples per tree and the number of trees, are specific to the RCF algorithm. These are known as hyperparameters. Each algorithm has its own hyperparameters that can be tuned; for example, see here for the list available when using RCF. When this is executed, the training process starts and you will see some output in the console. Note that you are charged for model training time; once the job completes, you will see the number of seconds you have been billed for.

At this point, you have a model, but now you want to productionize it and run inferences against it. Of course, it is not as easy as train and deploy – I am completely ignoring the testing/validation of the model and tuning based on that, as here I just want to show how SageMaker is effective at abstracting away the operational aspects of deploying a model. With SageMaker, you can deploy an endpoint, which is essentially your model hosted on a server behind an API that allows queries to be run against it, with a prediction returned to the requester. The endpoint can be spun up in a few lines of code:


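Something along these lines (the instance type and count are illustrative):

    # Deploy the trained model behind a hosted endpoint on a single inference instance.
    rcf_predictor = rcf.deploy(
        initial_instance_count=1,
        instance_type="ml.m4.xlarge",
    )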

Once you get confirmation that the endpoint is deployed – this will generally take a few minutes – you can use the predict function to run some inference, for example:


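A sketch of running inference against the endpoint, assuming the default predictor returned by deploy() for the built-in algorithms (test_data is a placeholder numpy array with the same feature layout as the training data):

    # Each result record carries an anomaly score assigned by the model.
    results = rcf_predictor.predict(test_data)
    scores = [record.label["score"].float32_tensor.values[0] for record in results]
    print(scores[:10])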

Once you are done playing around with your model and endpoint, don’t forget to stop your Jupyter notebook instance (you don’t need to delete it), and to delete any endpoints you have created, or you will continue to be charged.
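With the SageMaker Python SDK, the endpoint created above can be deleted from the notebook itself, for example:

    # Tear down the hosted endpoint so it stops incurring charges.
    rcf_predictor.delete_endpoint()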

Conclusions

AWS SageMaker is powerful in that it puts the ability to create machine learning models, and to set up endpoints to serve requests against them, into anybody’s hands. It is still a complex beast that requires knowledge of the machine learning process in order to be successful. However, in terms of being able to train a model quickly and put it into production, it is a very cool offering from AWS. You also get benefits like autoscaling of your endpoints should you need to scale up to meet demand. There is a lot to learn about SageMaker, and I’m barely scratching the surface here, but if you are interested in ML I highly recommend you take a look.

Using a Doomsday Clock to Track Technical Debt Risk

Every software team has technical debt, and those who say they don’t are lying. Even for new software, there are always items in the backlog that need attention, be they architecture trade-offs or areas of the code which are not as easy to maintain as they should be. Unless you have unlimited time and resources to deliver a project, which in reality you never do, you will always have items such as these in the backlog that need to be addressed, alongside the new features that need to be implemented. Mostly, but not always, technical debt items are deprioritized in favor of new features that generate visible outcomes, value for the customer, and revenue for the business. In my opinion, this is OK – technical debt in software projects is a fact of life, and as long as it is not recklessly introduced, and there is a plan to address it later, it is fine. It is worth looking at how technical debt gets introduced. In my experience, it is mostly down to time constraints, i.e. a delivery deadline that means trade-offs must be made. Martin Fowler introduced us to the Technical Debt Quadrant, which is a nice way of looking at how technical debt gets introduced. You would hope that you never end up anywhere in the top left.


[Image: Technical Debt Quadrant]

There are a few different ways of tracking technical debt, such as keeping items as labeled stories in your JIRA backlog, or using a separate technical debt register. The most important thing is that you actually track these items – and wherever you track them, it is critical to continuously review and prioritize them. It is also key that you address items as you iterate on new releases of your software. If you do not address technical debt and use all your team’s work cycles to add new features (and likely also new technical debt), you will come to a tipping point. You will find it takes ever longer to add new features, or worse, some technical debt items may begin to impact your production software – think of that performance trade-off you made a few years ago when you were sure the workload would never reach this scale; now it has, and customers are being impacted. So neglecting technical debt items that have the potential to be very impactful to your customer base is not a good idea, and these are the types of items I will discuss here.

Recently I was reading about the Doomsday Clock. If you are not familiar with it:

Founded in 1945 by University of Chicago scientists who had helped develop the first atomic weapons in the Manhattan Project, the Bulletin of the Atomic Scientists created the Doomsday Clock two years later, using the imagery of apocalypse (midnight) and the contemporary idiom of nuclear explosion (countdown to zero) to convey threats to humanity and the planet. The decision to move (or to leave in place) the minute hand of the Doomsday Clock is made every year by the Bulletin’s Science and Security Board in consultation with its Board of Sponsors, which includes 13 Nobel laureates. The Clock has become a universally recognized indicator of the world’s vulnerability to catastrophe from nuclear weapons, climate change, and disruptive technologies in other domains.

So I thought, why not take this model and use it to track not existential risks to humanity, but technical debt items that pose a known catastrophic risk to a software product or service, be it a complex desktop or mobile application or a large cloud service. The items I am considering here are not small issues such as ‘I made a change and ignored the two failing unit tests’. While those issues are still important, the items I am thinking about are things that would cause a catastrophic failure of your software in a production environment should a certain condition, or set of conditions, arise. Let’s take two examples.

For the first example, let us consider a popular desktop application that relies on a third-party library to operate successfully. From inception, the application has used the free version of the library, and there has always been an item in the backlog to migrate to the enterprise version to ensure long-term support. Now there is a hard end-of-support date six months from now, and you need to migrate to the enterprise version before then to continue receiving the regular security patches, or you risk exposing your customer base.

For the second example, consider a popular cloud service. The service uses a particular relational database that is key to the operation of the service and the customer value it provides. For some time, the scaling limits of this database have been known, and due to growth and expansion into international markets, those limits are closer than ever.

The main thing here is that I am talking about known technical debt items that will cause a catastrophe at some point in the future. It is important to draw the distinction between these and unknown items, to which teams will always need to react.

The method I had in mind for tracking such items, taking the Doomsday Clock analogy, was as follows (a small illustrative sketch follows the list):

  1. You take your top X (in order of priority) technical debt items – big-hitting items like those described above; you might have 5, you might even have 10.
  2. The clock starts at the same number of minutes from midnight as you have items – e.g. if you have 5 items, you start at 11:55pm.
  3. Each time one of these items causes a real issue, or an issue is deemed imminent, move the time 1 minute closer to midnight. Moving the clock closer to midnight should be decided by your most senior engineers and architects.
  4. The closer you get to midnight, the more danger you are in of having these items affect your customer base or revenue.
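Purely as an illustration, a toy sketch of that bookkeeping might look like the following (the class, the items, and the messages are all hypothetical):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TechDebtClock:
        # The top-priority technical debt items with catastrophic potential.
        items: List[str]
        minutes_to_midnight: int = field(init=False)

        def __post_init__(self):
            # Start the clock as many minutes from midnight as there are items.
            self.minutes_to_midnight = len(self.items)

        def move_closer(self, reason: str) -> None:
            # Called when an item causes a real issue, or an issue is deemed imminent.
            self.minutes_to_midnight = max(0, self.minutes_to_midnight - 1)
            print(f"{reason}: now {self.minutes_to_midnight} minute(s) to midnight")

    clock = TechDebtClock([
        "Third-party library end of support",
        "Relational database scaling limit",
    ])
    clock.move_closer("Database hit 90% of connection limit during peak traffic")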

Reaching midnight manifests as a catastrophic production issue, unhappy customers, and potential loss of revenue. In my opinion, executives, especially those without an engineering background, can easily grasp the severity of a situation if you explain it using this method. It also keeps Product Management, or whoever decides your road-map, focused on the key items that need to be addressed – it is easy to address small items like fixing that unit test and think you are addressing technical debt, but in reality you are just fooling yourself and your team.

How does your team track these items?

Aside

If you are an Iron Maiden fan, their song ‘2 Minutes to Midnight’ is a reference to the Doomsday Clock being set to two minutes to midnight in 1953, the closest it had been at that time, after the US and Soviet Union tested H-bombs within nine months of one another.

Wanderlust in Lockdown

I should be travelling to Düsseldorf in a few weeks, but in the current Covid-19 infected climate that’s not going to happen. This trip was to celebrate my birthday, which now I’ll be celebrating in lockdown at home instead. Things have changed so fast, and it really makes you appreciate things that you previously wouldn’t have thought twice about – small things like going out to dinner, and slightly bigger things like exploring Germany for a week for your birthday.

I’ve been fascinated with Germany since my first visit, and if I wasn’t Irish, I think my next most preferable nationality would be German, followed by Dutch. Düsseldorf was supposed to be our base for a week in Germany in April, and we had a few more cities on the list we were also going to visit, namely Mönchengladbach, Duisburg, Essen, and maybe Dortmund. We have traveled to Düsseldorf previously, in 2016, and loved it so much we always said we would go back. This year was supposed to be our chance, but it is not to be, at least in the first half of the year.

The city of Düsseldorf is compact and in some ways reminds me of Cork, easily explorable for the most part on foot. I loved walking around the Altstadt (old town) on our first visit, sampling the local beers and food, and taking in the atmosphere, especially on Wednesday, when it’s traditional to go for drinks after work. Düsseldorf’s main city park, the Hofgarten, is not to be missed, as is the Rheinpromenade, the promenade which runs along the River Rhine in the old town and is lined with bars and restaurants. From the Rheinpromenade you will find it hard to miss the Rheinturm (Rhine Tower); this huge structure is visible from almost anywhere on the promenade. At the top, there is an observation deck and a revolving restaurant which serves excellent food (expect to pay for it though), and from there you get superb views of the city and beyond. Düsseldorf is also the home of one of my favorite varieties of German beer – ‘Altbier’, named for the old style of brewing used in the production process. Altbier is native to this part of Germany, and I highly recommend a few pints of ‘Frankenheim Alt’ if you find yourself in the region. I’ve been buying this beer online frequently since I first visited Düsseldorf, but there’s nothing like the taste of it while actually there. Hopefully we ‘flatten the curve’ enough for me to make a trip this year! For now I’ll need to satisfy myself with this picture from my 2016 trip.



We had also planned to go back to Holland and visit Amsterdam (again!), Utrecht and Maastricht in May, but that’s also not likely now. At the moment I’m satisfying my wanderlust by following lots of European travel Instagrammers, and using Google Maps and Earth to plan our next trips.

On the plus side of being in lockdown for a few weeks, I’ve pulled out the electric guitar again and found a suitable online course. Hopefully this time I have the patience to stick with it. I’m also studying for my ‘Amazon Web Services – Certified Solutions Architect’ exam, and playing around with Amazon’s managed machine learning service, AWS SageMaker (expect a post on this later). I’ve also ordered a Dutch language audio course, so I can try speaking a little Dutch on my next trip to Holland. Maybe a bonus of the lockdown will be brushing up on old skills, or learning new ones. We’re staying positive anyway and viewing this as an opportunity to learn.

My Favorite Reads of 2019

A round-up of the books I’ve read during 2019. I had a target of 15 books this year, and am currently reading number 13, so not a bad outcome.

Not in Your Lifetime: The Defining Book on the J.F.K. Assassination by Anthony Summers

I had wanted to read something else on the J.F.K. assassination since reading Bill O’Reilly’s ‘Killing Kennedy’ a few years ago on holiday. This didn’t disappoint. The book is full of information I had not previously read, and it provokes a lot of thought.

Gestapo: The Story Behind the Nazis Machine of Terror by Lucas Saul

A short read on the Gestapo. Easy to read, but a bit simplistic for my liking. It might serve as a general introduction for those unfamiliar with the subject, but I didn’t really learn anything I didn’t already know.

Blitzed: Drugs in Nazi Germany by Norman Ohler

I enjoyed this account of narcotic usage in German political circles during WW2 by German writer Ohler. The account of Hitler’s personal doctor and Hitler’s addictions was very interesting.

Stalingrad by Antony Beevor

The first book I’ve read by Beevor, and definitely not my last. I loved his style of writing and his ability to present the facts. This is the account of the Battle of Stalingrad during WW2, which ultimately led to the loss of the entire German 6th Army, and of the repercussions and impact on the war.

The Fault in Our Stars by John Green

A re-read of this John Green tragedy while on holiday in Germany in June.

Dead Wake: The Last Crossing of the Lusitania by Erik Larson

An account of the Lusitania’s last Atlantic crossing, weaving in stories of real people while portraying the political turmoil of WW1. I thought this was excellently written and it was one of the books of this year I couldn’t put down.

Forensics: The Anatomy of Crime by Val McDermid

An interesting read from crime writer McDermid, giving some background and a light introduction to the forensic sciences from a few different perspectives. She interviews lots of experts in the different forensic fields, which adds credibility to her points, and overall I enjoyed this one.

Chernobyl: History of a Tragedy by Serhii Plokhy

I wanted to read something on Chernobyl after watching the miniseries on HBO, and this is an excellently written account of the lead-up to the tragedy, and the after-effects which are still felt today.

The Fall of Berlin 1945 by Antony Beevor

I became a fan of Beevor’s after reading ‘Stalingrad’ earlier in the year. This is his account of the fall of Berlin and the last days of WW2 in Germany. It’s a bit slow to get to the actual Battle of Berlin, but I guess all the lead-in information is necessary to set the scene and introduce the commanders on both sides. When it does get going it is very enjoyable, written in Beevor’s style which I really like.

The 5 AM Revolution: Why High Achievers Wake Up Early and How You Can Do It, Too by Dan Luca

This was a short read (about 150 pages) that I purchased at an airport in India while on a business trip in November. In my opinion it is not that original in the ideas it presents, but it is still an interesting read.

Apollo 11: The Inside Story by David Whitehouse

Not so much the story of Apollo 11, but the story of the entire Apollo Program and the race to beat the Russians. I really enjoyed this one, another one I found hard to put down.

The Witches: Salem, 1692 by Stacy Schiff

Repetitive, tedious, boring, and confusing are just some of the words I would use to describe Schiff’s writing style in this account of the 1692 Salem Witch Trials. For a subject that I find very interesting, boy, did I struggle with this one. I wish I could say it was worth it, but unfortunately not.

Almost the Perfect Murder: The Killing of Elaine O’Hara, the Extraordinary Garda Investigation and the Trial That Stunned the Nation by Paul Williams

I am currently reading this account of the Elaine O’Hara murder.