Fifteen years or so ago now, I worked on an OU short course – T184: Robotics and the Meaning of Life. The course took a broad view of robotics, from the technical (physical design, typical control systems, and relevant AI – artificial intelligence – techniques, along with their limitations) through to the social and political consequences. The course also included the RobotLab simulator, which could be used to programme a simple 2D robot, or a HEK – a self-purchased Home Experiment Kit in the form of Lego Mindstorms.
The course was delivered as part of the Technology Faculty Relevant Knowledge programme, originally championed by John Naughton. There’s a lot folk don’t know – or understand – about how the technology world works, and the Relevant Knowledge programme helped address that. The courses were for credit – 10 CAT points at level 1 – and were fairly priced: 100 hours of study for a hundred and fifty quid, with the CAT points as a bonus.
One of the things I was keen to put in T184 was a section on robot law, which complemented a section on “robot rights”; this reviewed laws that had been applied to slaves, children, animals and the mentally infirm, “sentient creatures”, in other words, whose behaviour or actions might be the responsibility of someone else, and asked whether such laws might be a useful starting point for legislating around the behaviour of intelligent, self-adaptive robots and their owners / creators. The course also drew on science fiction depictions of robots, making the case that while positronic brains were a fiction, the “Three Laws” that they implemented could be seen as useful design principles for robot researchers:
whereas, until such time, if ever, that robots become or are made self-aware, Asimov’s Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code;
That phrase does not come from the course, but it does appear in a draft report, published in May this year, from the European Parliament Committee on Legal Affairs [2015/2103(INL)]. The report includes “recommendations to the Commission on Civil Law Rules on Robotics” and, for the EU at least, perhaps acts as a starting pistol for a due consideration of what I assume will come to be referred to as “robot law”.
As well as considering robots as things deserving of rights that could be subjugated, I’d also explored the extent to which robots might be treated as “legal entities” in much the way that companies are legal entities, although I’m not sure that ever made it into the course.
whereas, ultimately, robots’ autonomy raises the question of their nature in the light of the existing legal categories – of whether they should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created, with its own specific features and implications as regards the attribution of rights and duties, including liability for damage;
Again – that’s the EU report from a couple of months ago. So what exactly is it proposing, and what does it cover? Well, the report:
Calls on the Commission to propose a common European definition of smart autonomous robots and their subcategories by taking into consideration the following characteristics of a smart robot:
- acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and trades and analyses data
- is self-learning (optional criterion)
- has a physical support
- adapts its behaviours and actions to its environment;
So not software robots, then? (Which raises a question – how might adaptive algorithms be regulated, and treated under law? Or algorithms that are manifest via “human” UIs, such as conversational chatbots?) Or would such things be argued as having “physical support”?
Hmmm… because whilst the report further notes:
… that there are no legal provisions that specifically apply to robotics, but that existing legal regimes and doctrines can be readily applied to robotics while some aspects appear to need specific consideration;
which is fine, it then seems to go off at a tangent as it:
calls on the Commission to come forward with a balanced approach to intellectual property rights when applied to hardware and software standards, and codes that protect innovation and at the same time foster innovation;
I can see the sense in this, though we maybe need to think about the IPR of control models arising from the way an adaptive system is trained, compared to the way it was originally programmed to enable it to be trained and acquire its own models – particularly where a third party, rather than the manufacturer, does the training. But then the report seems to go off the rails a bit as it:
calls on the Commission to elaborate criteria for an ‘own intellectual creation’ for copyrightable works produced by computers or robots;
That last sentence surely suggests that they’re talking about algorithms rather than robots? Or are they saying that if I write an adaptive computer program that generates a PNG, it’s not copyrightable, but if I program an adaptive robot with a pen on its back and it draws a picture, that is copyrightable? (I can see the IPR issues here may get a bit messy, though presumably contracts and licences associated with collaborative generative systems already start to address this?)
The report then seems to go off on another tangent, as it:
Points out that the use of personal data as a ‘currency’ with which services can be ‘bought’ raises new issues in need of clarification; stresses that the use of personal data as a ‘currency’ must not lead to a circumvention of the basic principles governing the right to privacy and data protection;
I’m not sure I see how that’s relevant here? There then follows a few sections relating to specific sorts of robot (autonomous cars, medical robots, drones) before addressing employment issues:
Bearing in mind the effects that the development and deployment of robotics and AI might have on employment and, consequently, on the viability of the social security systems of the Member States, consideration should be given to the possible need to introduce corporate reporting requirements on the extent and proportion of the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions; takes the view that in the light of the possible effects on the labour market of robotics and AI a general basic income should be seriously considered, and invites all Member States to do so;
So… robots in the workforce means you have to pay a national insurance contribution for what? FTE human jobs replaced? But there’s also a call for a general basic income?!
Then we return to what I thought the report was about – liability:
Considers that robots’ civil liability is a crucial issue which needs to be addressed at EU level so as to ensure the same degree of transparency, consistency and legal certainty throughout the European Union for the benefit of consumers and businesses alike;
Considers that, in principle, once the ultimately responsible parties have been identified, their liability would be proportionate to the actual level of instructions given to the robot and of its autonomy, so that the greater a robot’s learning capability or autonomy is, the lower other parties’ responsibility should be, and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be; notes, in particular, that skills resulting from ‘education’ given to a robot should be not confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot’s harmful behaviour is actually due;
The current recommendation appears to be that liability issues be addressed via a compulsory insurance scheme:
Points out that a possible solution to the complexity of allocating responsibility for damage caused by increasingly autonomous robots could be an obligatory insurance scheme, as is already the case, for instance, with cars; notes, nevertheless, that unlike the insurance system for road traffic, where the insurance covers human acts and failures, an insurance system for robotics could be based on the obligation of the producer to take out an insurance for the autonomous robots it produces;
which is fine, and other paragraphs explore that further; but then the report goes off on one again:
creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;
Which is firmly in the territory I wanted to explore in T184 way back when. For example, is the suggestion that we have some sort of “Intelligent Robot/Algorithm Capacity Act”, akin to the 2005 Mental Capacity Act perhaps?! Or is it more akin to corporate liability which seems to be under-legislated? And here’s where I start to wonder – where do you distinguish between robots as autonomous things that are legislated against, algorithms as autonomous things that are legislated against, sets of interacting algorithms creating complex adaptive systems as autonomous things that are legislated against, complex adaptive systems such as companies that are legislated against, and so on… (I maybe need to read Iain M. Banks’ sci-fi books about The Culture again!)
The report then goes on to suggest a draft Code of Ethical Conduct for Robotics Engineers, a Licence for Designers and a Licence for Users. But not a Licence for Robots themselves. Nor any mention of the extent to which the built environment should be made accessible for mobile robots. (“Robot accessibility” was another thing I was interested in!;-)
Another document that came out recently, from the DfT’s Centre for Connected and Autonomous Vehicles, is a (UK) consultation around Advanced driver assistance systems and automated vehicle technologies: supporting their use in the UK [Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies – PDF]. Apparently:
The only immediate change to primary legislation that we have identified as being required now is to update our insurance framework. This will give insurers and manufacturers time to consider what insurance products can come to market in time for when this technology arrives.
This reflects the likely short term arrival of “motorway assist systems for travel on high speed roads (i.e. motorways and major trunk roads); and remote control parking”. Platooning trials are also to take place.
For the longer term, the report distinguishes between “conventional driving, assisted driving and fully automated driving”:
The consultation doc is worth reading in full, but here are a couple of points that jumped out at me:
a vehicle owner who is ‘driving’ the highly automated vehicle might have legitimately disengaged from the driving task, with the vehicle having taken over control. If the technology fails and injures the ‘driver’, the current legislation only requires insurance to cover third parties and not the driver. It is up to the policy owner to seek additional insurance to cover any injury they do to themselves as a result of their own actions or negligence. If the AVT fails then the driver, in effect, becomes a victim as their injuries are not as a result of their own actions or negligence. We therefore need to protect the driver as a potential victim.
So you’ll need to insure yourself against the car?
The last line of this amused me:
We have considered whether a different definition of ‘user’ is needed in the Road Traffic Act for automated vehicles for the purposes of insurance obligation. For the first generation of AVT (where the driver is only ‘hands-off’ and ‘eyes-off’ for parts of the journey) we think that the driver falls under the current definition of a ‘user’. Once fully automated vehicles are available – which would drive themselves for the entire journey – it might be more appropriate to put the insurance obligation solely on the registered keeper.
“Registered keeper”. This may well be the current wording relating to vehicle ownership, but it made me think of a wild animal keeper. So harking back to Robot Law, would it be worth looking at the Dangerous Wild Animals Act 1976 or the Dangerous Dogs Act 1991? (Hmm… code sharing libraries, model sharing algorithms – “breeding” new code from old code…!)
We are not currently proposing any significant change in our rules on liability in road traffic accidents to reflect the introduction of automated cars. We still think a fault based approach combined with existing product liability law, rather than a new strict liability regime, is the best approach for our legal system. We think that the existing common law on negligence should largely be able to adapt to this new technology.
So the car won’t be a legal entity in its own right… though I wonder if a class of vehicles running under the same model/operating system would under the EU approach hinted at above?
If you were of suspicious mind, you might think that there could be an ulterior motive for pushing forward various forms of automotive automation…
Data will clearly be required to determine whether the driver or the vehicle was responsible for any collision, such as establishing who was in control at the time of the incident. This is likely to come from in-vehicle data recorders. Many vehicles already have data recorders fitted, although the data collected is not accessible without special equipment.
We expect that the out-of-the-loop motorway driving vehicles that are coming to market soon will have an event data recorder fitted. There are inevitably different views as to what data is essential and of course data protection and privacy considerations are important. It seems likely that data recorders would be regulated on an international basis, like most vehicle technologies. We will participate fully in this debate, equipped with views from the UK manufacturing and insurance industries, evidence from the various trials taking place and the first automated technologies that are coming to market.
Presumably, it’s easiest to just make everyone install a box…. (see Geographical Rights Management, Mesh based Surveillance, Trickle-Down and Over-Reach and Participatory Surveillance – Who’s Been Tracking You Today? for examples of how you can lose control of your car and/or data…) That said, boxes can be useful for crash investigations, and may be used in the defense of the vehicle’s actions, or perhaps in its praise: Tesla’s Autopilot May Have Saved A Life.
The following just calls out to be gamed – and also raises questions around updates, over-the-air or via a factory recall…
We do not think an insurer should be able to avoid paying damages to a third party victim where an automated vehicle owner fails to properly maintain and update the AVT or attempts to circumvent the AVT in breach of their insurance policy. Nor do we think that an insurer should be able to avoid paying damages to a third party victim if the vehicle owner or the named drivers on the policy attempt to use the vehicle inappropriately.
The following point starts to impinge on things like computer misuse as well as emerging robot law?
If an accident occurred as a result of an automated vehicle being hacked then we think it should be treated, for insurance purposes, in the same way as an accident caused by a stolen vehicle. This would mean that the insurer of the vehicle would have to compensate a collision victim, which could include the ‘not at fault driver’ for damage caused by hacking but, where the hacker could be traced, the insurer could recover the damages from the hacker.
In respect of the following point, I wonder how many of the products we buy at the moment integrate statistical computational models (rather than just relying on physics!)? Is the whole “product liability” thing due a review in more general terms?!
Currently the state of the art defence (section 4(1)(e) of the Consumer Protection Act 1987) provides a defence to product liability if, at the time the product was in the manufacturer’s control, the state of scientific and technical knowledge was not such that a manufacturer could have been expected to discover the defect. We could either leave manufacturers’ liability and product liability as it currently is or, instead, extend the insurance obligation to cover these circumstances so that the driver’s insurance would have to cover these claims.
To keep tabs on the roll out of autonomous vehicles in the UK, see the Driverless vehicles: connected and autonomous technologies policy area.
PS via Ray Corrigan, some interesting future law workshops under the banner Geek Law: Gikll 2013, Gikll 2014, Gikll 2015. The 2016 programme (for the London event, Sept 30) is available in an unreadable font here: Gikll 2016 programme.
A few months ago, I noticed that the Google geolocation service would return a lat/long location marker when provided with the MAC address of a wifi router (Using Google to Look Up Where You Live via the Physical Location of Your Wifi Router [code]) and in various other posts I’ve commented on how communities of bluetooth users can track each other’s devices (eg Participatory Surveillance – Who’s Been Tracking You Today?).
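By way of a recap, the lookup in that earlier post boils down to POSTing a list of router MAC addresses to Google’s geolocation endpoint and getting an estimated lat/long back. A minimal sketch of the request body (the API key, and the `requests` call that would actually send it, are left as comments):

```python
# Sketch of the request body for Google's geolocation service (the
# v1/geolocate endpoint): you POST the MAC addresses of nearby wifi
# access points and get back an estimated lat/long. A real request
# needs your own API key.

GEOLOCATE_URL = "https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_API_KEY"

def geolocate_payload(mac_addresses):
    """Build the JSON body for a wifi-based geolocation lookup."""
    return {
        "considerIp": False,  # don't fall back to the requester's IP address
        "wifiAccessPoints": [{"macAddress": mac} for mac in mac_addresses],
    }

payload = geolocate_payload(["00:11:22:33:44:55", "66:77:88:99:aa:bb"])
# A live lookup would then be something like:
#   requests.post(GEOLOCATE_URL, json=payload).json()["location"]
```
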
Which got me wondering… are there any apps out there that let me detect the MAC address of Bluetooth devices in my vicinity, and is there anyone aggregating the data, perhaps as a quid pro quo for making such an app available?
Seems like the answer is yes, and yes…
For example, John Abraham’s Bluetooth 4.0 Scanner [Android] app will let you [scan] for Bluetooth devices… The information recorded includes: device name, location, RSSI signal strength, MAC address, and MAC address vendor lookup.
In a spirit of sharing, the Bluetooth 4.0 Scanner app “supports the earthping.com project – crowdsourced Bluetooth database. Users are also reporting usage to find their lost Bluetooth devices”.
So when you run the app to check the presence of Bluetooth devices in your own vicinity, you also gift the location of those devices – along with their MAC addresses – to a global database – earthping. Good stuff… not.
We’re all familiar (at least in the UK) with surveillance cameras everywhere, and as object recognition and reconciliation tools improve it seems as if tracking targets across multiple camera views will become a thing, as demonstrated by the FX Pal Dynamic Object Tracking System (DOTS) for “office surveillance”.
It’s also increasingly the case that street furniture is appearing that captures the address of our electronic devices as we pass them. For example, in New York, Link NYC “is a first-of-its-kind communications network that will replace over 7,500 pay phones across the five boroughs with new structures called Links. Each Link will provide superfast, free public Wi-Fi, phone calls, device charging and a tablet for Internet browsing, access to city services, maps and directions”. The points will also allow passers-by to ‘view public service announcements and more relevant advertising on two 55” HD displays’ – which is to say they track everything that passes, try to profile anyone who goes online via the service, and then deliver targeted advertising to exactly the sort of people passing each Link.
LinkNYC is completely free because it’s funded through advertising. Its groundbreaking digital OOH advertising network not only provides brands with a rich, context-aware platform to reach New Yorkers and visitors, but will generate more than a half billion dollars in revenue for New York City.
So I wondered just what sorts of digital info we leak as we walk down the street. Via Tracking people via WiFi (even when not connected), I learn that devices operate in one of two modes: a listening beacon mode, where they essentially listen out for access points, but at a high battery cost; or a lower-energy ping mode, where they announce themselves (along with their MAC address) to anyone who’s listening.
If you want to track passers-by, many of whom will be pinging their credentials to anyone who’s listening, you can set up things like wifi routers in monitor mode to listen out for – and log – such pings. Edward Keeble describes how to do it in the post Passive WiFi Tracking…
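The capture side needs a monitor-mode wifi card sniffing for probe-request frames (scapy’s `sniff()` is the usual route, as in Keeble’s post); but once you have a stream of (timestamp, MAC, signal strength) sightings, the logging/aggregation side is simple enough to sketch in pure Python:

```python
from collections import defaultdict

class ProbeLog:
    """Aggregate (timestamp, MAC, RSSI) sightings harvested from wifi
    probe-request frames. The capture layer (monitor-mode card driven
    by a scapy sniffer or similar) is assumed to feed record()."""

    def __init__(self):
        self.sightings = defaultdict(list)  # mac -> [(timestamp, rssi), ...]

    def record(self, timestamp, mac, rssi):
        # normalise case so the same device isn't counted twice
        self.sightings[mac.lower()].append((timestamp, rssi))

    def devices_seen(self):
        """Unique MAC addresses logged so far."""
        return sorted(self.sightings)

    def dwell_time(self, mac):
        """How long a given device hung around, in seconds."""
        times = [t for t, _ in self.sightings[mac.lower()]]
        return max(times) - min(times) if times else 0

log = ProbeLog()
log.record(0, "AA:BB:CC:DD:EE:FF", -60)
log.record(90, "aa:bb:cc:dd:ee:ff", -55)   # same device, 90 seconds later
log.record(10, "11:22:33:44:55:66", -70)
```

Which is all a tracker needs to estimate footfall and dwell times for every phone that wanders past.
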
If you’d rather not hack together such a device yourself, you can always buy something off the shelf to log the MAC addresses of passers-by, eg from a supplier such as Libelium with its Meshlium Scanner [datasheet – PDF]. So for example:
- Meshlium Scanner AP – It allows to detect (sic) Smartphones (iPhone, Android) and in general any device which works with WiFi or Bluetooth interfaces. This model can receive and store data from Waspmotes with GPRS, 3G or WiFi, sending via HTTP protocol. The collected data can be send (sic) to the Internet by using the Ethernet.
- Meshlium Scanner 3G/GPRS-AP – It allows to detect (sic) Smartphones (iPhone, Android) and in general any device which works with WiFi or Bluetooth interfaces. This model can receive and store data from Waspmotes with GPRS, 3G or WiFi, sending via HTTP protocol. The collected data can be send (sic) to the Internet by using the Ethernet, and 3G/GPRS connectivity
- Meshlium Scanner XBee/LoRa -AP – It allows to detect (sic) Smartphones (iPhone, Android) and in general any device which works with WiFi or Bluetooth interfaces. It can also capture the sensor data which comes from the Wireless Sensor Network (WSN) made with Waspmote sensor devices. The collected data can be send (sic) to the Internet by using the Ethernet and WiFi connectivity.
So have any councils started installing that sort of device I wonder? And if so, on what grounds?
On the ad-tracking/marketing front, I’m also wondering whether there are extensions to cookie matching services that can match MAC addresses to cookies?
PS you know that unique tat you’ve got?! FBI Develops tattoo tracking technology!
PPS capturing data from wifi and bluetooth devices is easy enough, but how about listening out for mobile phones as phones? Seems that’s possible too, though perhaps not off-the-shelf for your everyday consumer…? What you need, apparently, is an IMSI catcher such as the Harris Corp Stingray. Examples of use here and here.
Pondering the extent to which Jupyter notebooks provide an accessible UI, I had a naive play with the Mac VoiceOver app run over Jupyter notebooks the other day: markdown cells were easy enough to convert to speech, but the code cells and their outputs are nested block elements which seemed to take a bit more navigation (I think I really need to learn how to use VoiceOver properly for a proper test!). Suffice to say, I really should learn how to use screen-reader software, because as it stands I can’t really tell how accessible the notebooks are…
A quick search around for accessibility related extensions turned up the jupyter-a11y: reader extension [code], which looks like it could be a handy crib. This extension will speak aloud the contents of a code cell or markdown cell, as well as navigational features such as whether you are in the cell at the top or the bottom of the page. I’m not sure it speaks aloud the output of a code cell though? But the code looks simple enough, so this might be worth a play with…
On the topic of reading aloud code cell outputs, I also started wondering whether it would be possible to generate “accessible” alt or longdesc text for matplotlib generated charts and add those to the element inserted into the code cell output. This text could also be used to feed the reader narrator. (See also First Thoughts on Automatically Generating Accessible Text Descriptions of ggplot Charts in R for some quick examples of generating textual descriptions from matplotlib charts.)
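To give a flavour of what I mean, here’s a minimal sketch of that sort of text generator; in practice the inputs would be pulled from the matplotlib objects themselves (`ax.get_title()`, `ax.get_xlabel()`, `line.get_ydata()` and so on), but the composition step is just string building:

```python
def describe_series(title, xlabel, ylabel, y):
    """Compose a simple alt-text style description of a line chart.
    title/labels/data would come from a matplotlib Axes via
    ax.get_title(), ax.get_xlabel(), ax.get_ylabel(), line.get_ydata()."""
    lo, hi = min(y), max(y)
    if y[-1] > y[0]:
        trend = "rises"
    elif y[-1] < y[0]:
        trend = "falls"
    else:
        trend = "stays level"
    return (f"Chart '{title}' plots {ylabel} against {xlabel}. "
            f"Values range from {lo} to {hi}, and the series {trend} overall.")

alt_text = describe_series("Engine temperature", "time (s)", "temp (C)", [20, 35, 90])
```
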
Another way of complementing the jupyter-a11y reader extension might be to use the python pindent [code] tool to annotate the contents of code cells with accessible comments (such as comments that identify the end of if/else blocks, and function definitions). Another advantage of having a pindent extension to annotate the content of notebook python code cells is that it might help improve the readability of code for novices. So for example, we could have a notebook toolbar button that will toggle pindent annotations on a selected code cell.
For code read aloud by the reader extension, I wonder if it would be worth running the content of any (python) code cells through pindent first?
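(pindent itself ships in the Python source tree as Tools/scripts/pindent.py; the core idea – appending a block-closing comment whenever the code dedents – can be sketched in a few lines. This is a simplification of what pindent does: else/elif/except clauses are merely skipped over rather than handled fully.)

```python
BLOCK_OPENERS = {"def", "if", "for", "while", "class", "with", "try"}
CONTINUATIONS = {"else:", "else", "elif", "except", "except:", "finally", "finally:"}

def annotate_block_ends(code):
    """Append pindent-style '# end <keyword>' comments when a block closes.
    A simplified sketch of what Python's Tools/scripts/pindent.py does."""
    out, stack = [], []  # stack holds (indent, keyword) for open blocks
    for line in code.splitlines():
        stripped = line.strip()
        if stripped:
            indent = len(line) - len(line.lstrip())
            first = stripped.split()[0]
            # close any blocks we have dedented out of (but not on else/except,
            # which continue the block they are attached to)
            while stack and indent <= stack[-1][0] and first not in CONTINUATIONS:
                i, kw = stack.pop()
                out.append(" " * i + "# end " + kw)
            if first.rstrip(":") in BLOCK_OPENERS and stripped.endswith(":"):
                stack.append((indent, first.rstrip(":")))
        out.append(line)
    while stack:  # close anything still open at end of file
        i, kw = stack.pop()
        out.append(" " * i + "# end " + kw)
    return "\n".join(out)

annotated = annotate_block_ends("def f(x):\n    if x:\n        return 1\n    return 2")
```
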
PS FWIW, here’s a related issue on Github.
PPS another tool that helps make python code a bit more accessible, in an active sense, in a Jupyter notebook is this pop-up variable inspector widget.
Pokemon Go seems to have hit the news this week – though I’m sure for anyone off social media last week and back to it next week, the whole thing will have completely passed them by – demonstrating that augmented reality apps really haven’t moved on much at all over the last five years or so.
But notwithstanding that, I’ve been trying to make sense of a whole range of mediated reality technologies for myself as prep for a very short unit on technologies and techniques on that topic.
Here’s what I’ve done to date, over on the Digital Worlds uncourse blog. This stuff isn’t official OU course material, it’s just my own personal learning diary of related stuff (technical term!;-)
- Blurred Edges – Dual Reality: intro post, setting the scene and introducing the worlds of mediated and augmented reality;
- Introducing Augmented Reality Apparatus – From Victorian Stage Effects to Head-Up Displays: some background history of a related effect – Pepper’s Ghost – and a look at modern day head-up displays that make use of it;
- The Art of Sound – Algorithmic Foley Artists? An aside reviewing the work of foley artists, and asking whether machines could ever replace them…
- Taxonomies for Describing Mixed and Alternate Reality Systems: some vocab to help us talk about the components that make up a mediated reality system;
- Augmenting Reality With Digital Overlays: examples of how augmented reality can be used to overlay the physical world with digital artefacts;
- Real or Virtual Objects? A bit more vocab – what’s real and what’s virtual?
- “Magic Lenses” and See-Through Displays: examples of “magic lens” style augmented reality;
- Noise Cancellation – An Example of Mediated Audio Reality? Augmented/mediated reality isn’t just about what you can see…
More to come over the next couple of weeks or so. If you want to comment, and perhaps influence the direction of my meanderings, please feel free to do that here or on the relevant post.
“Eating your own dogfood”, aka dogfooding, refers to the practice of a company testing its own products by using them internally. At a research day held by Somerset College, a quote in a talk by Lorna Sheppard on Len Deighton’s cookbooks (yes, that Len Deighton…) from a 2014 Observer magazine article (Len Deighton’s Observer cookstrips, Michael Caine and the 1960s) caught my attention:
[G]enerally, you stand a better chance of succeeding in something if whatever you create, you also like to consume.
Implicit in this is the idea that you are also creating for a purpose.
In the OU engineering residential school currently running at the University of Bath, one of the four day long activities the students engage with is a robotics activity using Lego EV3 robots, where at each stage we try to build in a reason for adding another programming construct or learning how to work with a new sensor. That is, we try to motivate the learning by making it purposeful.
The day is structured around a series of challenges that allow students to develop familiarity with programming a Lego EV3 robot, adding sensors to it, logging data from the sensors and then interpreting the data. The activities are contextualised by comparing the work done on the Lego EV3s with the behaviour of a Roomba robot vacuum cleaner – by the end of the morning, students will have programmed their robot to perform the majority of the Roomba’s control functions, including finding its way home to a homing beacon, as well as responding to touch (bumper), colour (line stopper) and proximity (infra-red and ultrasonic) sensors.
The day concludes with a challenge, where an autonomous robot must enter – and return from – a closed tunnel network, using sensors to collect data about the internal structure of the tunnel, as well identifying the location of a casualty who has an infra-red emergency beacon with them.
(The lids are placed on the tunnels so the students can’t see inside.)
As well as the partition walls (which are relocated each time the challenge is run, so I’m not giving anything away!), pipework and cables (aka coloured tape) also run through the tunnel and may be mapped by the students using a downward facing light sensor.
The casualty is actually a small wooden artist’s mannequin – the cuddly teddy we used to use does not respond well to the ultrasound sensor the students use to map the tunnel.
The data logged by the students include motor rotation data to track the robot’s progress, ultrasonic sensor data to map the walls, infra-red sensor data to find the emergency beacon, and light sensor data to identify the cables/pipework.
The data collected looks something like this:
The challenge is then to map the (unseen by the students) tunnel network, and tell the robot’s story from the data.
The result is a narrative that describes the robot’s progress, and a map showing the internal structure of the tunnel:
If time allows, this can then be used as the basis for programming the robot to complete a rescue mission!
The strategies used by the students to log the data, and control the robot to send it into the tunnel and retrieve it safely again, are based on what they learned completing the earlier challenges set throughout the day.
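The data interpretation step is essentially dead reckoning: the motor rotation counts give distance along the tunnel, and pairing each distance with the ultrasonic reading at that point gives a crude wall profile. A minimal sketch (the 56mm wheel diameter is the standard EV3 tyre, but treat it as an assumption to check against your own robot):

```python
import math

WHEEL_DIAMETER_MM = 56  # standard Lego EV3 tyre -- an assumption, measure yours!

def distance_mm(rotations, wheel_diameter_mm=WHEEL_DIAMETER_MM):
    """Convert a motor rotation count to distance travelled."""
    return rotations * math.pi * wheel_diameter_mm

def wall_profile(samples):
    """samples: list of (motor_rotations, ultrasonic_cm) log entries.
    Returns (distance_along_tunnel_cm, distance_to_wall_cm) pairs, from
    which the tunnel walls and partitions can be sketched out."""
    return [(distance_mm(r) / 10.0, d) for r, d in samples]

profile = wall_profile([(0, 30), (1, 30), (2, 12), (3, 30)])
# the dip to 12cm at ~35cm along suggests a partition wall jutting in
```
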
Via F1 journalist James Allen’s blog (Insight: Inside McLaren’s Secretive F1 Operations Room, “Mission Control”), I learn that the wheel hub of McLaren’s latest MP4-31 Formula One car crunches its own data. According to McLaren boss, Ron Dennis:
Each wheel hub has its own processing power, we don’t even take data from the sensors that surround the wheel [that measure] brake temperatures, brake wear, tyre pressures, G-Forces – all of this gets processed actually in the wheel hub – it doesn’t even get transmitted to the central ECU, the Electronic Control Unit.
If driver locks a brake or the wheel throws itself out of balance, we’re monitoring the vibration that creates against a model that says, “if the driver continues with this level of vibration the suspension will fail”, or the opposite, “we can cope with this vibration”.
With artificial intelligence and machine learning modelling now available as a commodity service, at least for connected devices, it’ll be interesting to see what the future holds for intelligence at the edge – sensors that don’t just return data (“something moved”, from a security sensor, say) but that return information (“I just saw a male, 6′, blue trousers, green top, leaving room 27 and going to the water cooler; it looked like… etc etc.”).
Of course, if you’re happy with your sensors just applying a model, rather than building one – which appears to be the case for the MP4-31 wheel hub – it seems that you can already do that at the 8-bit level using deep learning, as described by Pete Warden in How to Quantize Neural Networks with TensorFlow.
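The basic trick of 8-bit quantization is simple enough to sketch: map the floats in a weight array onto the 0–255 range, keep the offset and scale, and reconstruct approximately on the way back. (This is the linear scheme Warden describes; TensorFlow’s actual quantized ops have rather more machinery around them.)

```python
def quantize(values):
    """Map floats onto 0..255 codes, returning the codes plus the
    (offset, scale) needed to reconstruct them."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0  # avoid div-by-zero for constant arrays
    codes = [round((v - lo) / scale) for v in values]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Approximate reconstruction of the original floats."""
    return [lo + c * scale for c in codes]

weights = [-0.8, -0.1, 0.0, 0.3, 0.75]
codes, lo, scale = quantize(weights)
restored = dequantize(codes, lo, scale)
# each restored value is within half a quantization step of the original
```
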
By the by, if you want to have a quick play with a TensorFlow learner, check out the TensorFlow Neural Network Playground. Or how about training a visual recognition system with IBM’s Visual Recognition Demo?
Noticing that Alan just posted a Little Web Inspector / CSS Trick for extracting logos from web pages, here’s one for cleaning up ads from a web page you want to grab a screen shot of.
For example, I often take screenshots of new web pages for adding to “topical timeline” style presentations. As a reference, I often include the page URL from the browser navigation bar and the newspaper banner. But some news sites have ads at the top that you can’t scroll away:
Using a browser’s developer tools, you can “live edit” the page HTML in the browser – first select the element you want:
then delete it…
If that doesn’t do the trick, you can always edit the HTML directly – or modify the CSS:
With a bit of tinkering, you can get a version of the page that you can get a clean screenshot of…
By editing the page HTML, you can also add your own local graffiti to web pages to amuse yourself and pass the time…!;-)
For example, here’s me adding a Brython console to a copy of the OU home page in my browser…
This is purely a local copy, but functional nonetheless. And a great way of demonstrating to folk how you’d like a live web page to actually be, rather than how it currently is!-)