
A First Attempt at Wrangling WRC (World Rally Championship) Data With pandas and matplotlib

Last year was a quiet year on the Wrangling F1 Data With R front, with a python/pandas equivalent project that never even properly got started. With the new owners of F1 in place, things may change for the better in terms of engaging with fans and supporters, and I may revisit that idea properly, but in the meantime, I thought I’d start tinkering with a wider range of motorsport data.

The start to the BTCC season is still a few months away, but the WRC started over the weekend, and with review highlights and live coverage of one stage per rally on Red Bull TV, I thought I’d give that data a go…

Results and timing info can be found on the WRC web pages (I couldn’t offhand find a source of official FIA timing sheets) so here’s a first quick sketch using stage results from the first rally of the year – Monte Carlo.

world_rally_championship_-_results_monte_carlo_-_wrc_com

To start with, we need to grab the data. I’m using the pandas library, which has a handy .read_html() method that can scrape tables (crudely) from an HTML page given its URL.

import pandas as pd

def getStageResultsBase(year,rallyid,stages):
    ''' Get stage results and overall results at end of stage '''
    
    # Accept one stage number or a list of stage numbers
    stages=[stages] if not isinstance(stages,list) else stages
    
    #There are actually two tables on the stage results page
    df_stage=pd.DataFrame()
    df_overallpart=pd.DataFrame()
    
    #Grab data for each stage
    for stage in stages:
        url='http://www.wrc.com/live-ticker/daten/{year}/{rallyid}/stage.{rallyid}.{stage}.all.html'.format(year=year, rallyid=rallyid, stage=stage)
        #scrape the data
        results=pd.read_html(url,encoding='utf-8')
        results[0].columns=['pos', 'carNo', 'driverName', 'time', 'diffPrev', 'diffFirst']
        results[1].columns=['pos', 'carNo', 'driverName', 'time', 'diffPrev', 'diffFirst']
        
        #Simple cleaning - cast the data types as required
        for i in [0,1]:
            results[i].fillna(0,inplace=True)
            results[i]['pos']=results[i]['pos'].astype(float).astype(int)
            for j in ['carNo','driverName','time','diffPrev','diffFirst']:
                results[i][j]=results[i][j].astype(str)
        
        #Add a stage identifier
        results[0]['stage']=stage
        results[1]['stage']=stage
        
        #Add the scraped stage data to combined stage results data frames
        df_stage=pd.concat([df_stage,results[0]])
        df_overallpart=pd.concat([df_overallpart,results[1]])

    return df_stage.reset_index(drop=True), df_overallpart.reset_index(drop=True)
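
For example, a quick call along the following lines should pull back the stage and overall results for the first three stages of an event. (The rallyid is the WRC site’s own internal identifier for the event; the value used here is just a made-up placeholder.)

#Minimal usage sketch - the rallyid value (42) is a placeholder;
#the real value can be read from the URLs on the wrc.com results pages
df_stage, df_overall = getStageResultsBase(2017, 42, [1,2,3])
df_stage.head()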

The data we pull back looks like the following.

wrc_results_scraper1

Note that deltas (the time differences) are given as offset times in the form of a string. As the pandas library was originally developed in part for working with financial time series data, it has a lot of support for time handling, including the notion of a time delta:

pd.to_timedelta("1:2:3.0")
#Timedelta('0 days 01:02:03')

We can use this datatype to represent time differences from the results data:

#If we have hh:mm:ss format we can easily cast a timedelta
def regularTimeString(strtime):

    #Go defensive, just in case we're passed eg 0 as an int
    strtime=str(strtime)
    strtime=strtime.strip('+')

    modifier=''
    if strtime.startswith('-'):
        modifier='-'
        strtime=strtime.strip('-')

    timeComponents=strtime.split(':')
    ss=timeComponents[-1]
    mm=timeComponents[-2] if len(timeComponents)>1 else 0
    hh=timeComponents[-3] if len(timeComponents)>2 else 0
    timestr='{}{}:{}:{}'.format(modifier,hh,mm,ss)
    return pd.to_timedelta(timestr)
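
For example, running the handler over the sorts of values that appear in the scraped columns (a plain stage time, a ‘+’ prefixed delta, and a ‘-‘ prefixed delta):

regularTimeString('3:46.6')  #Timedelta('0 days 00:03:46.600000')
regularTimeString('+12.4')   #Timedelta('0 days 00:00:12.400000')
regularTimeString('-1:23.9') #Timedelta('-1 days +23:58:36.100000')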

We can use the time handler to cast the time differences from the scraped data as timedelta typed data:

def getStageResults(year,rallyid,stages):
    df_stage, df_overallpart = getStageResultsBase(year,rallyid,stages)
    for col in ['time','diffPrev','diffFirst']:
        df_stage['td_'+col]=df_stage.apply(lambda x: regularTimeString(x[col]),axis=1)
        df_overallpart['td_'+col]=df_overallpart.apply(lambda x: regularTimeString(x[col]),axis=1)
    return df_stage, df_overallpart 
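
Calling this over all the stages gives us the timedelta annotated dataframes. (The rallyid is again a placeholder value; the Monte Carlo rally ran to 17 stages.)

df_stage, df_overall = getStageResults(2017, 42, list(range(1,18)))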

wrc_results_scraper2

The WRC results cover all entrants to the rally, but not all the cars are classed as full blown WRC cars (class RC1). We can limit the data to just the RC1 cars and generate a plot showing the position of each driver at the end of each stage:

%matplotlib inline
import matplotlib.pyplot as plt

#df_overall is the overall results dataframe we grabbed above; the groupClass
#column identifying each entry's class is assumed to be available from the scrape
rc1=df_overall[df_overall['groupClass']=='RC1'].reset_index(drop=True)

fig, ax = plt.subplots(figsize=(15,8))
ax.get_yaxis().set_ticklabels([])
rc1.groupby('driverName').plot(x='stage',y='pos',ax=ax,legend=None);

wrc_results_scraper3

The position is actually the position of the driver across all entry classes, not just RC1. This means that if a driver has a bad day, they could be placed well down the all-class field; but that’s not of much interest if all we care about is the in-class ranking.

So what about if we rerank the drivers within the RC1 class? And perhaps improve the chart a little by adding a name label to identify each driver at their starting position?

rc1['rank']=rc1.groupby('stage')['pos'].rank()

fig, ax = plt.subplots(figsize=(15,8))
ax.get_yaxis().set_ticklabels([])
rc1.groupby('driverName').plot(x='stage',y='rank',ax=ax,legend=None)

#Add some name labels at the start
for i,d in rc1[rc1['stage']==1].iterrows():
    ax.text(-0.5, d['rank'], d['driverName'])

wrc_results_scraper4

This chart is a bit cleaner, but we now lose information about the lower placed in-class drivers, in particular their overall position when the other classes are taken into account too…

The FIA recover this information in their stage chart displays, which report on the evolution of the race for the top 10 cars overall (irrespective of class): excursions outside the top 10 on interim stages are shown “below the line”, annotated with the car’s overall classification on the corresponding stage.

stage_chart___federation_internationale_de_l_automobile

We can borrow this idea by assigning a “re-rank” to each car positioned outside the size of the class.

#Reranking...
RC1SIZE=12 #the number of places that would be filled if all the RC1 cars were classed at the top
rc1['xrank']= (rc1['pos']>RC1SIZE)
rc1['xrank']=rc1.groupby('stage')['xrank'].cumsum()
rc1['xrank']=rc1.apply(lambda row: row['pos'] if row['pos']<=RC1SIZE else row['xrank']+RC1SIZE, axis=1)
fig, ax = plt.subplots(figsize=(15,8))
ax.get_yaxis().set_ticklabels([])
rc1.groupby('driverName').plot(x='stage',y='xrank',ax=ax,legend=None)

#Name labels
for i,d in rc1[rc1['stage']==1].iterrows():
    ax.text(-0.5, d['xrank'], d['driverName'])
for i,d in rc1[rc1['stage']==17].iterrows():
    ax.text(17.3, d['xrank'], d['driverName'])

wrc_results_scraper5

The chart now shows the evolution of the race for the RC1 cars, retaining the spaced ordering of the top 12 positions that would be filled by the WRC1/RC1 cars if they were all placed above cars from other classes, and bunching those placed outside the group size. (Only 11 names are shown because one of the entries retired right at the start of the rally.)

So for example, in this case we see how Neuville, Hanninen and Serderidis are classed outside Lefebvre, who was actually classed 9th overall.

Further drawing on the FIA stage chart, we can label the excursions outside the top 12, and also invert the y-axis.

fig, ax = plt.subplots(figsize=(15,8))
ax.get_yaxis().set_ticklabels([])
rc1.groupby('driverName').plot(x='stage',y='xrank',ax=ax,legend=None);

#Label excursions outside the top 12 with the overall position on that stage
for i,d in rc1[rc1['xrank']>RC1SIZE].iterrows():
    ax.text(d['stage']-0.1, d['xrank'], d['pos'],
            bbox=dict(boxstyle='round,pad=0.3', color='pink'))

#Name labels
for i,d in rc1[rc1['stage']==1].iterrows():
    ax.text(-0.5, d['xrank'], d['driverName'])
for i,d in rc1[rc1['stage']==17].iterrows():
    ax.text(17.3, d['xrank'], d['driverName'])

#Flip the y-axis
plt.gca().invert_yaxis()

Lefebvre’s excursions outside the top 12 are now recorded and plain to see.

wrc_results_scraper6

We now have a chart that blends rank ordering with additional information showing where cars are outpaced by cars from other classes, in a space-efficient manner.

PS as with Wrangling F1 Data With R, I may well turn this into a Leanpub book, this time exploring the workflow to markdown (and/or maybe reveal.js slides!) from Jupyter notebooks, rather than from RStudio/Rmd.

First Thoughts on Detecting Motorsport Safety Car Periods from Laptimes

Prompted by Markku Hänninen, I thought I’d have a quick look at estimating motorsport safety car laps from a set of laptime data. For the uninitiated, if there is a dangerous hazard on track, the race-cars are kept out while the hazard is cleared, but led around by a safety car that limits the pace. No overtaking is allowed for race position, but under certain regulations, lapped cars may unlap themselves. Cars may also pit under the safety car.

Timing sheets typically don’t identify safety car periods, so the question arises: how can we detect them?

One likely indicator is that the average pace of laps completed under safety car conditions will be considerably slower than under racing conditions. A quick way of estimating the race pace is to find the fastest laptime across the whole of the race (or, in an online algorithm, the fastest laptime seen so far).

With a lapTimes dataframe containing columns lap, rawtime (that is, raw laptime in seconds) and position (that is, the race position of the driver recording a particular laptime on a particular lap), we can easily find the fastest lap:

minl=min(lapTimes['rawtime'])

We can find the mean laptime per lap using ddply(), from the plyr package, to group by lap:

library(plyr)

ddply(lapTimes[c('lap', 'rawtime', 'position')], .(lap), summarise,
      mean_laptime=mean(rawtime) )

We can also generate a variety of other measures. For example, within the grouped ddply operation, if we divide the mean laptime per lap (mean(rawtime)) by the fastest overall laptime we get a normalised mean laptime based on the fastest lap in the race.

ddply(lapTimes[c('lap','rawtime')], .(lap), summarise,
      norm_laptime=mean(rawtime)/minl )

We might also normalise the leader’s laptime for each lap, on the basis that the leader will be the car most likely to be driving at the safety car’s pace. (The summarising is essentially redundant here because we only have one row per group.)

ddply(lapTimes[lapTimes['position']==1, c('lap','rawtime')], .(lap), summarise,
      norm_leaders_laptime=mean(rawtime)/minl )

Using the normalised times, we can identify slow laps – for example, slow laps based on the mean laptime. In this case, I am using a heuristic that says a lap is slow if the normalised mean laptime is more than 1.3 times that of the fastest lap:

ddply(lapTimes[c('lap','rawtime')], .(lap), summarise,
      slow_lap_meanBasis= (mean(rawtime)/minl) > 1.3 )

If we assume that the first lap does not start under the safety car, we can then make a crude guess that a slow lap not on the first lap is a safety car lap.
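
Putting those pieces together, a minimal sketch of such a crude detector (reusing the 1.3 threshold heuristic; the safety_car column name is my own invention) might look like:

#Flag slow laps, then treat any slow lap other than the first lap as a likely safety car lap
sc = ddply( lapTimes[c('lap','rawtime')], .(lap), summarise,
            slow_lap= (mean(rawtime)/minl) > 1.3 )
sc['safety_car'] = sc['slow_lap'] & (sc['lap'] > 1)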

However, this does not take into account things like sudden downpours or other changes to the weather or track conditions. In such a case it may be likely that the majority of the field pits, so we might want to have a detector that flags whether a certain number of cars have pitted on a lap, possibly normalised against the current size of the field.

#lapsData.df() and pitsData.df() grab laptime and pitstop data from the
#ergast API; they are bundled with the Wrangling F1 Data With R book
laps=lapsData.df(2016,4)
pits=pitsData.df(2016,4)[c('lap','driverId')]
pits['pitstop']=T
lapTimes=merge(laps, pits, by=c('lap','driverId'), all.x=T)
lapTimes['pitstop']=!is.na(lapTimes['pitstop'])

#Count of stops per lap
ddply( lapTimes, .(lap), summarise, ps=sum(pitstop==TRUE) )

#Proportion of cars stopping per lap
ddply( lapTimes, .(lap), summarise, ps=sum(pitstop==TRUE)/length(pitstop) )
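
A corresponding detector might then simply threshold that proportion; the 0.5 cut-off here is an arbitrary placeholder value:

#Flag laps where at least half the current field pits
ddply( lapTimes, .(lap), summarise,
       many_stops= (sum(pitstop==TRUE)/length(pitstop)) >= 0.5 )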

That said, under safety car conditions, many cars do also take the opportunity to pit. However, under sudden changes of weather condition, we might expect nearly all the cars to come in, even if it means doubling up. (So another detector for weather might be two cars in the same team, close to each other in terms of gap, pitting on the same lap, with the result that one will be queued behind the other.)

As and when I get a chance, I’ll try to add some sort of ‘safety car’ estimator to the Wrangling F1 Data With R book.

When Documents Become Databases – Tabulizer R Wrapper for Tabula PDF Table Extractor

Although not necessarily the best way of publishing data, data tables in PDF documents can often be extracted quite easily, particularly if the tables are regular and the cell contents reasonably spaced.

For example, official timing sheets for F1 races are published by the FIA as event and timing information in a set of PDF documents containing tabulated timing data:

R_-_Best_Sector_Times_pdf__1_page_

In the past, I’ve written a variety of hand crafted scrapers to extract data from the timing sheets, but the regular way in which the data is presented in the documents means that they are quite amenable to scraping using a PDF table extractor such as Tabula. Tabula exists both as a server application, accessed via a web browser, and as a service using the tabula extractor Java application.

I don’t recall how I came across it, but the tabulizer R package provides a wrapper for tabula extractor (bundled within the package) that lets you access the service via its command line calls. (One dependency you do need to take care of is to have Java installed; adding Java into an RStudio docker container would be one way of taking care of this.)

Running the default extractor command on the above PDF pulls out the data of the inner table:

library(tabulizer)

extract_tables('Best Sector Times.pdf')

fia_pdf_sector_extract

Where the data is spread across multiple pages, you get a data frame per page.

R_-_Lap_Analysis_pdf__page_3_of_8_

Note that the headings for the distinct tables are omitted. Tabula’s “table guesser” identifies the body of the table, but not the spanning column headers.

The default settings are such that tabula will try to scrape data from every page in the document.

fia_pdf_scrape2

Individual pages, or sets of pages, can be selected using the pages parameter. For example:

  • extract_tables('Lap Analysis.pdf', pages=1)
  • extract_tables('Lap Analysis.pdf', pages=c(2,3))

Specified areas for scraping can also be specified using the area parameter:

extract_tables('Lap Analysis.pdf', pages=8, guess=F, area=list(c(178, 10, 230, 500)))

The area parameter originally appeared to take co-ordinates in the form top, left, width, height, but has since been fixed to take co-ordinates in the same form as those produced by the tabula app debugger: top, left, bottom, right.

You can find the necessary co-ordinates using the tabula app: if you select an area and preview the data, the selected co-ordinates are viewable in the browser developer tools console area.

Select_Tables___Tabula_concole

The tabula console output gives co-ordinates in the form top, left, bottom, right; with the original form of the area parameter you needed to do some sums to convert these numbers into the arguments tabulizer wanted, but with the fixed form they can be passed through directly.

fia_pdf_head_scrape

Using a combination of “guess” to find the dominant table, and specified areas, we can extract the data we need from the PDF and combine it to provide a structured and clearly labeled dataframe.
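
By way of illustration, here’s a minimal sketch of that sort of workflow; the page number and header area co-ordinates are purely illustrative placeholders:

library(tabulizer)

#Scrape the spanning header information from a specified area of the page...
header = extract_tables('Lap Analysis.pdf', pages=3, guess=F,
                        area=list(c(20, 10, 60, 550)))
#...and let the table guesser find the dominant body table on the same page
body = extract_tables('Lap Analysis.pdf', pages=3)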

On my to do list: add this data source recipe to the Wrangling F1 Data With R book…

Visualising F1 Stint Strategies

With the new F1 season upon us, I’ve started tinkering with bits of code from the Wrangling F1 Data With R book and looking at the data in some new ways.

For example, I started wondering whether we might be able to learn something interesting about the race strategies by looking at laptimes on a stint by stint basis.

To begin with, we need some data – I’m going to grab it directly from the ergast API using some functions that are bundled in with the Leanpub book…

#ergast functions described in: https://leanpub.com/wranglingf1datawithr/
library(plyr)

#Get laptime data from the ergast API
l2=lapsData.df(2016,2)
#Get pits data from the ergast API
p2=pitsData.df(2016,2)

#merge pit data into the laptime data
l3=merge(l2,p2[,c('driverId','lap','rawduration')],by=c('driverId','lap'),all=T)

#generate an inlap flag (inlap is the lap assigned the pit time)
l3['inlap']=!is.na(l3['rawduration'])

#generate an outlap flag (outlap is the first lap of the race or a lap starting from the pits)
l3=ddply(l3,.(driverId),transform,outlap=c(T,!is.na(head(rawduration,-1))))

#use the pitstop flag to number stints; note: a drive through penalty increments the stint count
l3=arrange(l3,driverId, -lap)
l3=ddply(l3,.(driverId),transform,stint=1+sum(inlap)-cumsum(inlap))

#number the laps in each stint
l3=arrange(l3,driverId, lap)
l3=ddply(l3,.(driverId,stint),transform,lapInStint=1:length(stint))
l3=arrange(l3,driverId, lap)

The laptimes associated with the in- and out-laps around a pit stop add noise to the full lap times completed within each stint, so let’s flag those laps so we can filter them out:

#Discount the inlap and outlap
l4=l3[!l3['outlap'] & !l3['inlap'],]

We can now look at the data… I’m going to facet by driver, and also group the laptimes associated with each stint. Then we can plot just the raw laptimes, and also a simple linear model based on the full lap times within each stint:

#Generate a base plot
library(ggplot2)

g=ggplot(l4,aes(x=lapInStint, y=rawtime, col=factor(stint)))+facet_wrap(~driverId)

#Chart the raw laptimes within each stint
g+geom_line()

#Plot a simple linear model for each stint
g+ geom_smooth(method = "lm", formula = y ~ x)

So for example, here are the raw laptimes, excluding inlap and outlap, by stint for each driver in the recent 2016 Bahrain Formula One Grand Prix:

bah_2-16_racestint_raw

And here’s the simple linear model:

bah_2-16_racestint_lm

These charts highlight several things:

  • trivially, the number and length of the stints completed by each driver;
  • degradation effects in terms of the gradient of the slope of each stint trace;
  • fuel effects – the y-axis offset for each stint is the sum of the fuel effect (as the race progresses the cars get lighter and laptimes come down more or less linearly) and a basic tyre effect (the “base” time we might expect from a tyre). Based on the total number of laps completed in stints prior to a particular stint, we can calculate a fuel effect offset for the laptimes in each stint, which should serve to normalise the y-axis laptimes and make the base tyre laptime more evident;
  • looking at the charts as a whole, we get a feel for strategy – what sort of tyre/stint strategy do the cars start race with, for example; are the cars going long on tyres without much degradation, or pushing for various length stints on tyres that lose significant time each lap? And so on… (What can you read into/from the charts? Let me know in the comments below;-)

If we assume a 0.083s per lap fuel weight penalty effect, we can replot the chart to account for this:

#Generate a base plot
g=ggplot(l4,aes(x=lapInStint, y=rawtime+0.083*lap, col=factor(stint)))
g+facet_wrap(~driverId) +geom_line()

Here’s what we get:

f1_2016_bah_stint_raw_fc

And here’s what the fuel corrected models look like:

f1_2016_bah_stint_lm_fc

UPDATE: the above fuel calculation goes the wrong way – oops! It should be:

MAXLAPS=max(l4['lap'])
FUEL_PENALTY =0.083
.e = environment()
g=ggplot(l4,aes(x=lapInStint,y=rawtime-(MAXLAPS-lap)*FUEL_PENALTY,col=factor(stint)),environment=.e)

What we really need to do now is annotate the charts with additional tyre selection information for each stint.

We can also do a few more sums. For example, generate a simple average laptime per stint, excluding inlap and outlap times:

#Calculate some stint summary data
l5=ddply(l4,.(driverId,stint), summarise,
                               stintav=sum(rawtime)/length(rawtime),
                               stintsum=sum(rawtime),
                               stintlen=length(rawtime))

which gives results of the form:

          driverId stint   stintav stintsum stintlen
1          rosberg     1  98.04445 1078.489       11
2          rosberg     2  97.55133 1463.270       15
3          rosberg     3  96.15543  673.088        7
4          rosberg     4  96.32494 1637.524       17
5            massa     1  99.13600  495.680        5
6            massa     2 100.48300 2009.660       20
7            massa     3  98.77862 2568.244       26

It would possibly be useful to also compare inlaps and outlaps somehow, as well as factoring in the pitstop time. I’m pondering a couple of possibilities for the latter:

  • amortise the pitstop time over the laps leading up to the pitstop by adding a pitstop lap penalty to each lap in that stint, calculated as the pitstop time divided by the number of laps in the stint leading up to the pitstop; this essentially penalises the stint that leads up to the pitstop as a consequence of forcing the pitstop (a sketch of this approach follows below);
  • amortise the pitstop time over the laps immediately following a pitstop by adding a pitstop lap penalty to each lap in that stint, calculated as the pitstop time divided by the number of laps in the stint following the pitstop; this essentially penalises the stint that immediately follows the pitstop, and discounts some of the benefit from the pitstop.
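
Here’s a quick, untested sketch of the first approach, under the assumption that the rawduration pit time sits on the inlap that closes each stint:

#Spread each stint's closing pitstop time over the laps of that stint
#(rawduration is NA on laps without a pitstop, hence the na.rm)
l6=ddply(l3, .(driverId,stint), transform,
         pitPenalty=sum(rawduration, na.rm=T)/length(lap))
l6['penalisedTime']=l6['rawtime']+l6['pitPenalty']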

I haven’t run the numbers yet though, so I’m not sure how these different approaches will feel…

A Couple of Observations Around the F1 Qualifying Rule Change

A couple of quick sketches to demonstrate the effect of the recent rule change to F1 Qualifying…

First, session utilisation for qualifying during last year’s 2015 Bahrain Formula One Grand Prix:

F1_2015_BAH_Q_su

Cars are on track throughout qualifying with many drivers completing two stints per session, with two or three laps per stint in the first part of qualifying.

Here’s what qualifying looked like this year:

F1_2016_BAH_Q_su

One or two single lap stints, in the main, in Q1, and single lap, single run outings in Q2. Q3 also saw half the runners completing only a single stint.

Earlier in the year, when planning for the new qualifying format was to-ing and fro-ing, Bernie Ecclestone commented that changes would not be possible because “his technical team cannot prepare the software and graphics in time”.

In part, that’s because the whole timing thing is a mess. The FIA time sheets for qualifying yesterday included lap times that fell outside the rolling guillotine. My qualifying session cut-off time code doesn’t really work any more because it doesn’t factor in the rolling cut-off time, but what it does show is that a time recorded on the time sheet for Perez would have made the cut-off, if Perez hadn’t completed the lap too late.

bah16_q1-cutoff

Although recorded later in the session, Hulkenberg’s time was allowed, because he hadn’t yet been excluded.

To the extent that the new qualifying format was supposed to mix-up the grid, this sort of worked – Perez will be starting lower down the grid than he would have under last year’s qualifying format.

For more charts from this week’s race, see f1datajunkie.com. To learn how to create the charts, see Wrangling F1 Data With R.

Using Regular Expressions to Filter Tweets Based on the Number of Times a Pattern Appears Within Them

Every so often, it’s good to get a little practice in using regular expressions… Via the wonderful F1 Metrics blog, I noticed that the @f1debrief twitter account had been tweeting laptimes from the F1 testing sessions. The data wasn’t republished on the F1 Metrics blog (though I guess I could have tried to scrape it from the charts), but it is still viewable on the @f1debrief timeline, so I grabbed the relevant tweets using the Twitter API statuses/user_timeline call:

#make_twitter_request() and the authenticated twitter_api object are helper
#functions/objects assumed to be set up elsewhere
response1 = make_twitter_request(twitter_api.statuses.user_timeline,
                                 screen_name='f1debrief', count=200, exclude_replies='true',
                                 trim_user='true', include_rts='false',
                                 max_id='705792403572178944')
response2 = make_twitter_request(twitter_api.statuses.user_timeline,
                                 screen_name='f1debrief', count=200, exclude_replies='true',
                                 trim_user='true', include_rts='false',
                                 max_id=response1[-1]['id'])
tweets = response1 + response2

The tweets I was interested in look like this (and variants thereof):

F1_Debrief___f1debrief____Twitter

The first thing I wanted to do was to limit the list of tweets I’d grabbed to just the ones that contained a list of laptimes. The way I opted to do this was to create a regular expression that spots patterns of the form N.N (a digit, a decimal point, a digit), and then select tweets that contain three or more instances of this pattern. The regular expression .findall() method finds all instances of the specified pattern in a string and returns them in a list.

import re

regexp = re.compile(r'\d\.\d')

#reverse the order of the tweets so they are in ascending time order
for i in tweets[::-1]:
    if len(re.findall(regexp, i['text']))>=3:
        pass #...do something with the tweets containing 3 or more laptimes

Inspecting several of the timing related tweets, they generally conform to a pattern of:

  • first line: information about the driver and the tyres (in brackets)
  • a list of laptimes, each time on a separate line;
  • an optional final line that typically started with a hashtag

We can use a regular expression match to try to pull out the name of the driver and tyre compound based on a common text pattern:

#The tweets are written in Greek; the driver name typically appears
#immediately after the word του ('of the')
#The tyre compound appears in brackets
regexp3 = re.compile('^.* του (.*).*\s?\((.*)\).*')
#I could have tried to extract drivers more explicitly from a list of driver names I knew to be participating

#split the tweet text by end of line character
lines=i['text'].split('\n')

#Try to pull out the driver name and tyre compound from the first line
m = re.match(regexp3, lines[0])
if m:
    print('',m.group(1).split(' ')[0],'..',m.group(2))
    #There is occasionally some text between the driver name and the bracketed tyre compound
    #So split on a space and select the first item
    dr=m.group(1).split(' ')[0]
    ty=m.group(2)
else:
    dr=lines[0]
    ty=''
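
For example, on a first line following the typical pattern (the Greek phrasing here is an invented stand-in, not an actual tweet):

m = re.match(regexp3, 'Οι χρόνοι του Hamilton (soft)')
m.group(1).split(' ')[0], m.group(2)  #('Hamilton', 'soft')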

For the timings, we need to do a little bit of tidying. Generally times were of the form N:NN.NN, but some were of the form NN.NN. In addition, there were occasional rogue spaces in the timings. In this case, we can use regular expressions to substitute on a particular pattern:

for j in lines[1:]:
    j=re.sub('^(\d+\.)','1:\\1',j)
    j=re.sub('^(\d)\s?:\s?(\d)','\\1:\\2',j)
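
For example, on the sorts of values that turn up in the tweets:

re.sub('^(\d+\.)', '1:\\1', '46.6')                #'1:46.6' - NN.NN promoted to N:NN.NN
re.sub('^(\d)\s?:\s?(\d)', '\\1:\\2', '1 : 46.6')  #'1:46.6' - rogue spaces removed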

The final code can be found in this gist and gives output of the form:

f1test2016__txt_and_notebooks

There are a few messed up lines, as the example shows, but these are easily handled by hand. (There is often a trade-off between fully automating and partially automating a scrape. Sometimes it can be quicker just to do a final bit of tidying up in a text editor.) In the example output, I also put in an easily identified line (starting with ==) that shows the original first line of a tweet (it also has the benefit of making it easy to find the last line of the previous tweet, just in case that needs tidying too…). These marker lines can easily be removed from the file using a regular expression pattern as the basis of a search and replace (replacing with nothing to delete the line).

So that’s three ways of using regular expressions – to count the occurrences of a pattern and use that as the basis of a filter; to extract elements based on pattern matching in a string; and as the basis for a direct pattern based string replace/substitution.