Samsung Galaxy Fold: Broken screens delay launch

The Samsung Galaxy Fold was supposed to be released on 26 April

Samsung has postponed the release of its folding smartphone, days after several early reviewers said the screens on their devices had broken.

The company said it had delayed the launch of the Galaxy Fold to “fully evaluate the feedback and run further internal tests”.

In April, several early reviewers found the display on the Galaxy Fold broke after just a few days.

Samsung has not said when the £1,800 device will go on sale.

A new launch date will be announced in the “coming weeks”.

In a statement, Samsung said it suspected the damage experienced by some of the reviewers was caused by “impact on the top and bottom exposed areas of the hinge”.

It also said it found “substances” inside one of the review devices that may have affected its performance.

Launch events due to take place in Hong Kong and Shanghai this week have also been postponed.


The Galaxy Fold was due to be released in the United States on 26 April, and in the UK on 3 May.

The South Korean tech giant has said it is investigating what went wrong with the broken review units.

In some cases, reviewers had peeled off a layer of the screen’s coating, mistaking it for a disposable screen protector.

“We will also enhance the guidance on care and use of the display including the protective layer,” Samsung said in a statement.

Chinese rivals Huawei and Xiaomi are also developing foldable smartphones, but neither company has announced a release date yet.

An alternative way to capture childhood on your phone

Parents are desperate to record childhood memories and the smartphone has allowed them to do this like never before. But what is the best way to go about it?

If all the videos you took of your children growing up were damaged and you could keep only the pictures or the sound, which would it be?

I liked to tantalise myself with this question before I had children, and I imagined surprising people by saying that – despite being a video journalist – I would choose the sound.

There is something more evocative about it, particularly the voice. To hear again a deceased relative, for example, is more arresting to me than to see a picture or silent video.

However, what I’ve actually found since becoming a parent is that there is another way of recording the fleeting moments of childhood, the results of which are more precious to me than either video or sound.

My preferred method still involves the smartphone, but it is focused on the power of words.

To explain the inspiration behind my method I need to recall my own childhood.

When I was around 10 or 11 years old, I became intrigued by a book I found on my parents’ bookshelf.

It was called Conversations with Children, an anthology of transcripts made by a child psychologist called R D Laing, who recorded what his children had said.

It was full of all the wonderful, crazy, uninhibited ideas you might expect. It was both entertaining and thought-provoking because Laing took the chance to explain the common patterns of emotional and intellectual development experts find in children.

He explained how during childhood we gradually come to understand concepts that determine our place in the world and what is possible within it: size, geography, time, empathy, ownership, societal norms, death.

I determined that when I had children I would record something similar myself.

R. D. Laing’s ‘Conversations with Children’ was a bestseller first published in 1978

In 2009 I acquired my first child and Steve Jobs’s third iPhone.

So I was part of the first generation of parents to have easy access to a stills camera, video recorder, audio recorder and digital notepad all in one handy device.

The early iPhones, having a fairly low resolution, didn’t capture video very well. But in any case, when my daughter started to speak her first words, I found that I had a strong impulse to write down what she said rather than film her – remembering Laing’s inspirational book.

To begin with, I wrote her early words in an ornate, hardback book that I bought specially for the purpose, befitting the words’ importance, I thought.

But this presented problems. I soon became worried about losing it. And it took time to find it when there was something to write down, meaning I might forget what had been said in the meantime.

I found it more convenient to write down the words in the Notes app of my iPhone, which I could always whip out of my pocket. Once a month or so I could email the notes so I had a back-up copy. Later, cloud computing would help.


I had never in my life kept a diary, but suddenly it felt vital to record the experiences unfolding around me as accurately as possible.

Some pitfalls immediately became apparent on appointing myself the family’s digital scribe and archivist.

My fumbling on the phone was sometimes misconstrued as untimely and indulgent internet surfing – an injustice when I was actually engaged in the noble task of recording events for posterity. You have to disengage temporarily from family life to make a decent stab at recording it accurately.


Of course I wanted to keep as accurate a record as possible.

But can words, recalled by a human, be as reliable as recorded video or sound?

One thing I’ve found from hours spent filming and recording audio at work as a BBC News video features journalist is that the most poignant moments are very difficult to capture.

You are lucky to have the mechanical equipment on and recording during that telling event that unfolded so quickly around you.

But by using that capturing device that is always on but invisible, known as our memory, any event, any candid, revelatory moment that unfolded suddenly out of the mundane, can be recorded and cherished.

The trade-off is you lose the 100% mechanical guarantee of accuracy.

There have been times when something wonderful was said so perfectly by my children, that I was determined to write down their precise words at the first opportunity.

Unnatural reactions

But inevitably I would be confounded by a thousand preoccupations that looking after children throws your way, resulting in the mental agony that I couldn’t guarantee to myself that I’d recorded the words correctly.

Longer, drawn-out conversations, of course – like an argument between two siblings over who has the larger spoon at breakfast – can’t, unfortunately, be recorded verbatim.

Another issue I’ve encountered at work is the reaction of humans to being recorded.

As soon as the red light is on and the subject is invited to speak, everybody to some extent acts unnaturally, from the member of the public (nervous) to the media-trained professional (too polished, verbose and over-confident).

The most revealing comments – even when a story is completely uncontroversial – are made off-camera, when a person is relaxed and has forgotten the recording device is there.

This doesn’t matter for the uninhibited toddler. But certainly from around five years old, a child has developed enough self-consciousness and a sense of identity to change behaviour when they realise they are being filmed for others and for posterity.

Notes

You can tell this from the way they now produce a staged smile for a photograph.

This phase of child development is recorded in my own notes.

I begin to find references, from around the age of five, to the whole note-recording process, including requests for things to be written down because the subjects themselves realise what they have said is funny or otherwise noteworthy.

Occasionally there is an objection to the whole enterprise, for being embarrassing or boring.

Shadows

Of course I’m sure I’m not the first parent to have written down the choice words their children have spoken.

But I am one of the new generation to benefit from having the smartphone to aid the enterprise.

In an age when parents are obsessively filming, photographing and sharing to social media, I think it’s worth remembering the power of this simpler, in some ways more intimate, method.

It’s something I came to appreciate even more as I witnessed my children learning to read and write themselves: the joy of being absorbed in a book you are devouring, the creative possibilities opened up by writing.

Both are so much more fulfilling than passively consuming an endless stream of games and videos on a mobile device.

Future-proof

Recording childhood through a digital record of words also carries some practical benefits. It avoids the nightmare of trying to sync videos from your phone and then organise and archive them in a safe place.

And it is also more future-proof, because you have to wonder whether, in decades’ time, the video file formats you used will still be readable.

Today my children have both reached five years old. The endearing, hilarious, uninhibited, sweet, surreal words still flow – although now they are more of a manageable trickle.

My book now stands at 135 pages.

When I look back on the notes, the power of these words is already greater than I could have imagined when I started the project.

To reread them stirs memories, recalling sights and sounds from the moment they were spoken, in a very vivid way.

Perhaps one day when I am frail in a care home this most precious book can be read to me.

Perhaps by my own children, if they visit.

And they will know that I was there and I heard every word.

And that I cared enough to record it.

Millions using 123456 as password, security study finds

Liverpool FC topped the list of Premier League club names used as passwords

Millions of people are using easy-to-guess passwords on sensitive accounts, suggests a study.

The analysis by the UK’s National Cyber Security Centre (NCSC) found 123456 was the most widely-used password on breached accounts.

The study helped to uncover the gaps in cyber-knowledge that could leave people in danger of being exploited.

The NCSC said people should string three random but memorable words together to use as a strong password.
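
For readers curious what that advice looks like in practice, here is a minimal sketch of generating a three-word passphrase in Python. It is illustrative only: the wordlist path is a placeholder, and any reasonably long list of memorable words would do.

```python
# Minimal sketch of the NCSC's "three random words" advice.
# "wordlist.txt" is a placeholder: one word per line, e.g. a list of common nouns.
import secrets

def three_word_password(wordlist_path="wordlist.txt", separator="-"):
    with open(wordlist_path, encoding="utf-8") as f:
        words = [line.strip() for line in f if line.strip()]
    # secrets.choice uses a cryptographically secure random source,
    # unlike random.choice, so the result is suitable for passwords.
    return separator.join(secrets.choice(words) for _ in range(3))

print(three_word_password())  # e.g. harbour-violin-cactus
```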

Sensitive data

For its first cyber-survey, the NCSC analysed public databases of breached accounts to see which words, phrases and strings people used.

Top of the list was 123456, appearing in more than 23 million passwords. The second-most popular string, 123456789, was not much harder to crack, while others in the top five included “qwerty”, “password” and 1111111.
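
How might a tally like that be produced? The sketch below counts the most common entries in a plain-text dump of breached passwords, one per line. The filename is a placeholder, and this is only a simplified illustration of the general approach, not the NCSC’s actual methodology.

```python
# Simplified illustration: count the most frequent passwords in a breach dump.
# "breached_passwords.txt" is a placeholder file with one password per line.
from collections import Counter

def top_passwords(path="breached_passwords.txt", n=5):
    counts = Counter()
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            password = line.rstrip("\n")
            if password:
                counts[password] += 1
    return counts.most_common(n)

for password, count in top_passwords():
    print(f"{password}: {count:,} occurrences")
```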

The most common name to be used in passwords was Ashley, followed by Michael, Daniel, Jessica and Charlie.

When it comes to Premier League football teams in guessable passwords, Liverpool are champions and Chelsea are second. Blink-182 topped the charts of music acts.

People who use well-known words or names for a password put themselves at risk of being hacked, said Dr Ian Levy, technical director of the NCSC.

“Nobody should protect sensitive data with something that can be guessed, like their first name, local football team or favourite band,” he said.

Hard to guess

The NCSC study also quizzed people about their security habits and fears.

It found that 42% expected to lose money to online fraud and only 15% said they felt confident that they knew enough to protect themselves online.

It found that fewer than half of those questioned used a separate, hard-to-guess password for their main email account.

Security expert Troy Hunt, who maintains a database of hacked account data, said picking a good password was the “single biggest control” people had over their online security.

“We typically haven’t done a very good job of that either as individuals or as the organisations asking us to register with them,” he said.

Letting people know which passwords were widely used should drive users to make better choices, he said.

The survey was published ahead of the NCSC’s Cyber UK conference that will be held in Glasgow from 24-25 April.

Facebook bans UK far right groups and leaders

Facebook has imposed a ban on a dozen far-right individuals and organisations that it says “spread hate”.

The ban includes the British National Party and Nick Griffin, the English Defence League and the National Front.

The list also includes Britain First, which was already banned, but this latest action will prohibit support for it on any of the US firm’s services.

It said it had taken the action because those involved had proclaimed a “violent or hateful mission”.

“Individuals and organisations who spread hate, or attack or call for the exclusion of others on the basis of who they are, have no place on Facebook,” the social network added in a statement.

The pages of some organisations named were still present on Facebook before the announcement

The ban includes:

  • The British National Party and its ex-leader Nick Griffin
  • Britain First, its leader Paul Golding and former deputy leader Jayda Fransen
  • English Defence League and its founding member Paul Ray
  • Knights Templar International and its promoter Jim Dowson
  • National Front and its leader Tony Martin
  • Jack Renshaw, a neo-Nazi who plotted to murder a Labour MP

A spokesman for Facebook clarified what would now be done to the pages the groups and individuals had run on its site. All those named would be prevented from having a presence on any Facebook service.

In addition, praise and support for the groups or named individuals would no longer be allowed.


This action, he said, went further than the restrictions placed on Britain First last year when its official pages were removed for breaking the site’s community standards.

The latest move comes soon after Facebook said it would block “praise, support and representation of white nationalism and separatism” on its main app and Instagram.

Some controversial figures, such as Tommy Robinson, are already subject to bans on the social network.

Facebook scraped email contacts of 1.5 million users

Facebook “unintentionally” uploaded the email contacts of more than 1.5 million users without asking permission to do so, the social network has admitted.

The data harvesting happened via a system used to verify the identity of new members.

Facebook asked new users to supply the password for their email account, and took a copy of their contacts.

Facebook said it had now changed the way it handled new users to stop contacts being uploaded.

Data losses

All those users whose contacts were taken would be notified and all the contacts it had grabbed without consent would be deleted, it said.

The information grabbed is believed to have been used by Facebook to help map social and personal connections between users.

Anyone who, like me, joined Facebook a decade or more ago, probably clicked “yes” when invited to upload all of their contacts.

It seemed a good way of making the network more useful and, after all, what could be the harm? But after the various data scandals shattered trust in Facebook, we’ve become far more cautious.

We’ve woken up to the harms that could come from handing over that precious information about our social connections – for journalists it could mean revealing their contacts, for whistleblowers their dealings with regulators, for just about anyone their contacts with people they might not want their partners to know about.

Now we know that Facebook somehow scraped up the email contacts of 1.5 million people over a three-year period without their agreement. Now every time the social network suggests “people you may know”, we will wonder “How do you know that I may know them?”

To many, the idea that they should trust Facebook with their data seems more old-fashioned by the day.


Contacts started being taken without consent in May 2016, the company told Business Insider, which broke the story.

Before this date, new users were asked if they wanted to verify their identity via their email account. They were also asked if they wanted to upload their address book voluntarily.

This option and the text specifying that contacts were being grabbed was changed in May 2016 but the underlying code that actually scraped contacts was left intact, said Facebook.

Ireland’s Data Protection Commissioner, which oversees Facebook in Europe, is engaged with the firm to understand what happened and its consequences.

Rep Ocasio-Cortez said social media was a ‘health risk’

The email contacts case is the latest in a long series in which Facebook has mishandled the data of some of its billions of users.

In late March, Facebook found that the passwords of about 600 million users were stored internally in plain text for months.

The ongoing breaches and other criticisms of Facebook are also prompting some high-profile users to bow out. The latest is Democrat Representative Alexandria Ocasio-Cortez who said she had “quit” the social network.

In an interview with a Yahoo News podcast she said: “I personally gave up Facebook, which was kind of a big deal because I started my campaign on Facebook.”

She added that social media posed a “public health risk”.

UK to introduce porn age-checks in July

An age-check scheme designed to stop under-18s viewing pornographic websites will come into force on 15 July.

From that date, affected sites will have to verify the age of UK visitors.

If they fail to comply they will face being blocked by internet service providers.

But critics say teens may find it relatively easy to bypass the restriction or could simply turn to porn-hosting platforms not covered by the law.

Twitter, Reddit and image-sharing community Imgur, for example, will not be required to administer the scheme, because the rules only apply to sites and apps where more than a third of the content is pornographic.

Likewise, any platform that hosts pornography but does not do so on a commercial basis – meaning it does not charge a fee or make money from adverts or other activity – will not be affected.

Furthermore, it will remain legal to use virtual private networks (VPNs), which can make it seem like a UK-based computer is located elsewhere, to evade the age checks.

The authorities have, however, acknowledged that age-verification is “not a silver bullet” solution, but rather a means to make it less likely that children stumble across unsuitable material online.

“The introduction of mandatory age-verification is a world-first, and we’ve taken the time to balance privacy concerns with the need to protect children from inappropriate content,” said the Minister for Digital Margot James.

“We want the UK to be the safest place in the world to be online, and these new laws will help us achieve this.”


Call to action

It had originally been proposed that pornographic services that refused to carry out age checks could be fined up to £250,000. However, this power will not be enforced because ministers believe the threat to block defiant sites will be sufficient and that trying to chase overseas-based entities for payment would have been difficult.

However, the government has said that other measures could follow.

“We know that pornography is available on some social media platforms and we expect those platforms to do a lot more to create a safer environment for children,” a spokesman for the Department for Digital, Culture, Media and Sport (DCMS) told the BBC.

“If we do not see action then we do not rule out legislating in the future to force companies to take responsibility for protecting vulnerable users from the potentially harmful content that they host.”

The age checks were originally proposed by the now defunct regulator Atvod in 2014 and were enacted into law as part of the Digital Economy Act 2017. But their rollout had been repeatedly delayed.

UK-hosted pornographic video services already have to verify visitors’ ages, as do online gambling platforms.

‘Porn passes’

The British Board of Film Classification (BBFC) – which gives movies their UK age certificates – will be responsible for regulating the effort. It will instruct internet providers which sites and apps to block for non-compliance. In addition, it can call on payment service providers to pull support, and ask search engines and advertisers to shun an offending business.

The pornographic platforms themselves will have freedom to choose how to verify UK visitors’ ages.

But the BBFC has said that it will award solutions that adopt “robust” data-protection standards with a certificate, allowing them to display a green AV (age verification) symbol on their marketing materials to help consumers make an informed choice.

One digital rights campaign group questioned the sense of this scheme being voluntary.

“Having some age verification that is good and other systems that are bad is unfair and a scammer’s paradise – of the government’s own making,” said Jim Killock from the Open Rights Group.

“Data leaks could be disastrous. And they will be the government’s own fault.”

Mindgeek, one of the adult industry’s biggest players, has developed an online system of its own called AgeID, which it hopes will be widely adopted. It involves adults having to upload scans of their passports or driving licences, which are then verified by a third-party.

It has said that all the information will be encrypted and that the AgeID system will not keep track of how each user’s account is used.

Mindgeek intends to launch its AgeID system soon in the UK

High street stores and newsagents will also sell separate age-verification cards to adults after carrying out face-to-face checks, according to the government.

Dubbed “porn passes” by the media, the idea is that users would type in a code imprinted on the cards into pornographic websites to gain access to their content.

The BBFC has said it will also create an online form for members of the public to flag non-compliant sites once the new regulations come into effect.

“We want to make sure that when these new rules are implemented they are as effective as possible,” commented the National Society for the Prevention of Cruelty to Children (NSPCC).

“To accomplish this, it is crucial the rules keep pace with the different ways that children are exposed to porn online.”

The age checks form part of a wider effort by the UK’s authorities to make the internet safer to use for young people.

Most recently, DCMS proposed the creation of a new regulator to tackle apps that contain content promoting self-harm and suicide, among other problems.

In addition, the Information Commissioner’s Office has proposed services stop using tools that encourage under-18s to share more personal data about themselves than they would do otherwise.

XXX

The idea of the government keeping a database of verified porn viewers had sounded like a privacy and ethical nightmare.

Luckily it has dodged that bullet. While ministers have ordered porn sites to age-verify users, they have not told them how they must do so.

That means different sites will have different systems.

Those “porn passes” that your friendly local newsagent may soon dish out are a theoretical solution, but there is no obligation for any porn site to accept them.

So, you may potentially have to verify yourself several times for several porn sites.

Despite the introduction of a new kitemark-like badge to identify cyber-security conscious systems, there’s still a concern that some will suffer data breaches causing people’s adult interests to be exposed.

Article 13: UK helps push through new EU copyright rules

A revamp of the EU’s copyright rules has passed its final hurdle and will now come into law.

The rules include a section known as Article 13.

It says that if users upload infringing content to a service, the tech firm involved must either make a “best effort” to get permission from the rights holders or quickly remove it.

The UK was among 19 nations that supported the law in its European Council vote.

But Poland was one of those that objected on the grounds that it could pave the way to internet censorship.

EU sources say that five other countries also opposed the rules – Italy, Finland, Sweden, Luxembourg and the Netherlands – while Belgium, Estonia and Slovenia abstained.

Google had led lobbying efforts against the law’s introduction.

At one point it had featured pop-up notices on its YouTube video-streaming service warning that the effort could have “unintended consequences”, including the blocking of some of its clips for EU-based users.

In particular, there was concern that memes featuring clips from TV shows and films could no longer be shared. However, tweaks to the law subsequently made an exception for content used for the “purposes of quotation, criticism, review, caricature, parody and pastiche”.

Even so, there is still a concern that smaller sites will struggle to track down and pay copyright holders or to develop content filters that automatically block suspect material.

Another controversial rule – which says that search engines and social media providers will have to pay news publishers to feature snippets of their content – also remains.

Wikipedia blacked out four of its European sites in protest last month. It said the rules would make information harder to find online and thus make it harder for its volunteers to source information.

But European media industry leaders have welcomed the effort.

“Publishers of all sizes, and other creators, will now have the right to set terms and conditions for others to reuse their content commercially, as is only fair and appropriate,” commented Xavier Bouckaert, president of the European Magazine Media Association.

Helen Smith, executive chair of the Independent Music Companies Association, added: “It was a long road and we would like to thank everyone who contributed to the discussion. As a result, we now have a balanced text that sets a precedent for the rest of the world to follow, by putting citizens and creators at the heart of the reform and introducing clear rules for online platforms.”

The EU’s member states now have two years to adopt the rules into their national laws.

Facebook, Instagram and WhatsApp suffer outages

Social networks Facebook and Instagram, as well as messaging service WhatsApp, were unavailable on Sunday for more than three hours, users said.

The website Down Detector reported that thousands of people globally had complained about the Facebook-owned trio being down from 11:30 BST onwards.

Facebook users were presented with the message: “Something went wrong.”

At 14:50, the site said it had resolved the issue after some users “experienced trouble connecting” to the apps.

A spokesman for the company added: “We’re sorry for any inconvenience.”

Facebook did not comment on the cause of the problem, or say how many users had been affected.

In March, Facebook experienced one of its longest ever outages, with some users around the globe unable to access its site, as well as Instagram and WhatsApp, for more than 24 hours.

Internet Archive denies hosting ‘terrorist’ content

The site takes snapshots of sites to log how the web changes

The Internet Archive has been hit with 550 “false” demands to remove “terrorist propaganda” from its servers in less than a week.

The demands came via the Europol net monitoring unit and gave the site only one hour to comply.

The Internet Archive said the demands wrongly accused it of hosting terror-related material.

The website said the requests set a poor precedent ahead of new European rules governing removal of content.

If the Archive does not comply with the notices, it risks its site getting added to lists which ISPs are required to block.

Automatic removal

The Internet Archive, which uses the archive.org web address, is a non-profit organisation that lets people save and visit pages that might otherwise have been lost from the net.

In a blog, the website’s Chris Butler said that it had received notices identifying hundreds of web addresses stored on archive.org as leading people to banned material.

However, Mr Butler said, the reports were wrong about the content they pointed to, or were too broad for the organisation to comply with.

Some of the requests referred to material that had “high scholarly and research value” and was not produced by terror groups, he said.

Others called for the delisting of massively popular links that led people to “millions” of items.

Article 13

As well as listing vast amounts of non-contentious data, Mr Butler said, the demands to remove material were issued during the night when the Archive was unstaffed. This made it impossible to react within the one-hour window demanded by the notices, he said.

“It is not possible for us to process these reports using human review within a very limited timeframe like one hour,” he said.

He asked: “Are we to simply take what’s reported as ‘terrorism’ at face value and risk the automatic removal of things like the primary collection page for all books on archive.org?”

Initially the website believed that the notices came from a unit within Europol, the European policing agency, known as the Internet Referral Unit (IRU). It is tasked with seeking out terror-related materials and making net firms remove them.

However, Europol said the requests actually came from the French IRU which routed its requests through Europol.

The French IRU has not yet responded to a BBC request for comment on why it issued so many reports to the site.

Mr Butler said the Archive had not complied with the requests and was still receiving lots of takedown notices from the French IRU.

He said the Archive’s experience did not bode well for impending European rules governing the use of copyrighted material.

The Article 13 provision of European laws asks sites to get content checked before it is uploaded.

Huawei wi-fi modules were pulled from Pakistan CCTV system

The surveillance system was built to monitor Lahore's streets following a series of terrorist attacks

Huawei removed wi-fi transmitting cards from a Pakistan-based surveillance system’s CCTV cabinets after they were discovered by the project’s staff.

Punjab Safe City Authority (PSCA) told BBC Panorama it had told the firm to remove the modules in 2017 “due to [a] potential of misuse”.

The authority said that the Chinese firm had previously made mention of the cards in its bidding documents.

But a source involved in the project suggested the reference was obscure.

A spokesman for Huawei said there had been a “misunderstanding”. He added that the cards had been installed to provide diagnostic information, but said he was unable to discuss the matter further.

The PSCA confirmed that the explanation it had been given was that wi-fi connectivity could have made it easier for engineers to troubleshoot problems when they stood close to the cabinets, without having to open them up.

Two people involved in Lahore’s project helped bring the matter to the BBC’s attention and have asked to remain anonymous. One said that Huawei had never provided an app to make use of the wi-fi link, and added that the cabinets could already be managed remotely via the surveillance system’s main network.

It was suggested that a wi-fi link could have helped engineers troubleshoot problems without having to climb up and open the cabinets

A UK-based cyber-security expert said that it was not uncommon for equipment sellers to install extra gear to let them offer additional services at a later date.

But he added that the affair highlighted the benefit of oversight because if the authority had remained unaware of the cards’ existence, it could not have taken steps to manage any potential risk they posed.

“As soon as you give someone another method of remote connectivity you give them a method to attack it,” commented Alan Woodward.

“If you put a wi-fi card in then you’re potentially giving someone some other form of remote access to it. You might say it’s done for one purpose, but as soon as you do that it’s got the potential to be misused.”

There is no evidence that the cards created a vulnerability, and one of the sources involved confirmed that there had not been an opportunity to test if they could be exploited before the kit was removed.

‘Prompt response’

Lahore’s Safe City scheme was first announced in 2016 following a series of terrorist bombings.

It provides a vast surveillance network of cameras and other sensors, and a brand new communications system for the city’s emergency services.

As part of the system, Huawei installed 1,800 CCTV cabinets, within which it placed the wi-fi modules behind other equipment.

The cards were placed among other equipment in the cabinets

The PSCA’s chief operating officer told the BBC that Huawei had been “prompt” in its response to a request to remove them and had fully “complied with our directions”.

“It is always [the] choice of the parties in a contract to finalise the technical details and modules as per their requirements and local conditions,” added Akbar Nasir Khan.

“PSCA denies that there are any threats to the security of the project [and the] system was continuously checked by our consultants, including reputed firms from [the] UK.”

Local concerns were raised over the Safe City scheme earlier this year, after images showing couples travelling together in vehicles were reportedly leaked and circulated via social media.

But there is no suggestion that this was related to Huawei’s involvement, and in any case the wi-fi modules would have been removed by this point. The PSCA has also denied anyone from its office had been involved.

Websites to be fined over ‘online harms’ under new proposals

Internet sites could be fined or blocked if they fail to tackle “online harms” such as terrorist propaganda and child abuse, under government plans.

The Department for Digital, Culture, Media and Sport (DCMS) has proposed an independent watchdog that will write a “code of practice” for tech companies.

Senior managers could be held liable for breaches, with a possible levy on the industry to fund the regulator.

But critics say the plans threaten freedom of speech.

The Online Harms White Paper is a joint proposal from the DCMS and the Home Office. A public consultation on the plans will run for 12 weeks.

The paper suggests:

  • establishing an independent regulator that can write a “code of practice” for social networks and internet companies
  • giving the regulator enforcement powers including the ability to fine companies that break the rules
  • considering additional enforcement powers such as the ability to fine company executives and force internet service providers to block sites that break the rules

Outlining the proposals, Digital, Culture, Media and Sport Secretary Jeremy Wright said: “The era of self-regulation for online companies is over.

“Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.”

Discussing financial penalties on BBC Breakfast, he said: “If you look at the fines available to the Information Commissioner around the GDPR rules, that could be up to 4% of a company’s turnover… we think we should be looking at something comparable here.”

What are ‘online harms’?

The plans cover a range of issues that are clearly defined in law such as spreading terrorist content, child sex abuse, so-called revenge pornography, hate crimes, harassment and the sale of illegal goods.

But it also covers harmful behaviour that has a less clear legal definition such as cyber-bullying, trolling and the spread of fake news and disinformation.

It says social networks must tackle material that advocates self-harm and suicide, which became a prominent issue after 14-year-old Molly Russell took her own life in 2017.

After she died her family found distressing material about depression and suicide on her Instagram account. Molly’s father holds the social media giant partly responsible for her death.


Home Secretary Sajid Javid said tech giants and social media companies had a moral duty “to protect the young people they profit from”.

“Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.”

What do the proposals say?

The plans call for an independent regulator to hold internet companies to account.

It would be funded by the tech industry. The government has not decided whether a new body will be established, or an existing one handed new powers.

The regulator will define a “code of best practice” that social networks and internet companies must adhere to.

As well as Facebook, Twitter and Google, the rules would apply to messaging services such as Snapchat and cloud storage services.

The regulator will have the power to fine companies and publish notices naming and shaming those that break the rules.

The government says it is also considering fines for individual company executives and making search engines remove links to offending websites.

Ministers “envisage” that fines and warning notices to companies will be included in an eventual bill.

They are also consulting over blocking harmful websites or stopping them from being listed by search engines.

On the face of it, this is a tough new regime – and ministers have acted upon the demands of charities like the NSPCC which want what they regard as the “Wild West Web” to be tamed.

But a closer look reveals all sorts of issues yet to be settled.

Will a whole new organisation be given the huge job of regulating the internet? Or will the job be handed to the media regulator Ofcom?

What sort of sanctions will be available to the regulator? And will they apply equally to giant social networks and to small organisations such as parents’ message boards?

Most tricky of all is how the regulator is going to rule on material that is not illegal but may still be considered harmful.

Take this example. Misinformation is listed as a potential harm, and Health Secretary Matt Hancock has talked about the damaging effects anti-vaccination campaigners have had.

So will the regulator tell companies that their duty of care means they must remove such material?

The government now plans to consult on its proposals. It may yet find that its twin aims of making the UK both the safest place in the world online and the best to start a digital business are mutually incompatible.

Presentational grey line

What will the ‘code of practice’ contain?


The white paper offers some suggestions that could be included in the code of best practice.

It suggests the spread of fake news could be tackled by forcing social networks to employ fact-checkers and promote legitimate news sources.

But the regulator will be allowed to define the code by itself.

The white paper also says social media companies should produce annual reports revealing how much harmful content has been found on their platforms.

The children’s charity NSPCC has been urging new regulation since 2017 and has repeatedly called for a legal duty of care to be placed on social networks.

A spokeswoman said: “Time’s up for the social networks. They’ve failed to police themselves and our children have paid the price.”

How have the social networks reacted?


Rebecca Stimson, Facebook‘s head of UK policy, said in a statement: “New regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.

“New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech.”

Twitter‘s head of UK public policy Katy Minshall said in a statement: “We look forward to engaging in the next steps of the process, and working to strike an appropriate balance between keeping users safe and preserving the open, free nature of the internet.”

TechUK, an umbrella group representing the UK’s technology industry, said the government must be “clear about how trade-offs are balanced between harm prevention and fundamental rights”.

Matthew Lesh, head of research at free market think tank the Adam Smith Institute, went further.

He said: “The government should be ashamed of themselves for leading the western world in internet censorship.

“The proposals are a historic attack on freedom of speech and the free press.

“At a time when Britain is criticising violations of freedom of expression in states like Iran, China and Russia, we should not be undermining our freedom at home.”

And freedom of speech campaigners Article 19 warned that the government “must not create an environment that encourages the censorship of legitimate expression”.

A spokesman said it opposed any duty of care being imposed on internet platforms.

They said that would “inevitably require them to proactively monitor their networks and take a restrictive approach to content removal”.

“Such actions could violate individuals’ rights to freedom of expression and privacy,” they added.

Facebook to consider live video restrictions after NZ attacks

Some 20,000 people attended a memorial service on Friday for the 50 victims who were killed while praying

Facebook has promised to explore restrictions on live-streaming, two weeks after it was used during gun attacks on two mosques in New Zealand.

Chief operating officer Sheryl Sandberg said the social media giant agreed with calls it “must do more”.

Fifty people were killed in the Christchurch shootings, and the original video of the attack was viewed 4,000 times before it was removed.

Meanwhile, New Zealand is to review “inadequate” laws on hate speech.

Justice Minister Andrew Little said the current laws did not tackle “the evil and hateful things that we’re seeing online”, and that the government and the Human Rights Commission would work to bring forward proposals by the end of the year.

More than 20,000 people attended a memorial service in Christchurch on Friday to honour the 50 victims of the 15 March shooting.

Of the dozens injured, 21 people remain in hospital, three of them in intensive care.

What does Facebook say?

“All of us at Facebook stand with the victims, their families, the Muslim community, and all of New Zealand,” Ms Sandberg wrote in her letter to the New Zealand Herald.

“Many of you have also rightly questioned how online platforms such as Facebook were used to circulate horrific videos of the attack… We have heard feedback that we must do more – and we agree.”

Facebook’s Sheryl Sandberg outlined the company’s actions in a letter to a New Zealand newspaper

Ms Sandberg said: “First, we are exploring restrictions on who can go Live depending on factors such as prior Community Standard violations.”

Facebook said fewer than 200 people had watched the 17-minute video of the Christchurch shootings while it was live, and the first user report of the video came 12 minutes after it ended.

Social media sites struggled to contain the attack video, which was copied onto the alt-right message board 8chan and then spawned 1.5 million copies.

The chief operating officer did not announce any policy changes, but outlined how the social network would strengthen the rules for using Facebook Live and take greater steps to address hate on its platforms.

The company has said it will block “praise, support and representation of white nationalism and separatism” on Facebook and Instagram from next week.

Facebook has been heavily criticised within New Zealand in the wake of the attack over its lack of response to officials.

The country’s privacy commissioner wrote “your silence is an insult to our grief” to Facebook executives last week, according to the Herald.

Facebook has said it is working with the New Zealand Police on its investigation.

Russia police probe ‘dark net’ murder case

Lt Col Shishkina worked for the interior ministry investigating fraud and drug trafficking

Police in Russia are investigating what could be the world’s first documented case of a contract killing ordered via the so-called dark net.

High-ranking police investigator Yevgeniya Shishkina was shot dead outside her home near Moscow in October 2018.

Five months later, local police announced they had arrested two people in connection with the killing – a 19-year-old medical student called Abdulaziz Abdulazizov, and a 17-year-old schoolboy, who cannot be named for legal reasons. Both are from St Petersburg.

Drug sales

Preliminary details of the case, which has shocked Russian investigators, have been shared unofficially with BBC Russian by sources close to the investigation. A court case is expected later this year, in which the two students face prosecution.

The documents allege that the pair were paid one million rubles (£12,000) to carry out the murder. According to the police reports, the order was placed by the owner of a drug-dealing site on an illegal online trading platform that forms part of what is known as the dark net.

Igor Bederov, who runs the St Petersburg-based Internet Search Company, told BBC Russian the site is primarily used for buying and selling drugs. It is one of dozens of sites that make up Russia’s largest illegal drug-trading platform, Hydra.

The dark net refers to parts of the internet that are not open to public view. It contains illegal trading sites that sell drugs, hacking software, counterfeit money and, as appears to be the case here, contract killing services.

To date there are no proven cases of someone being killed by an assassin hired via the dark net.

Dark net markets often require specialised software to access them

The documents seen by BBC Russian suggest the man who allegedly placed the order for the murder, who goes by the pseudonym Miguel Morales, was being investigated by Lt Col Shishkina. (Miguel Morales is the name of a former Mexican drug lord and leader of the criminal organisation known as Los Zetas.) His real name has not been revealed.

In August 2018, the 17-year-old appears to have posted on the Hydra platform, looking for work. A month later, according to the police documents, he received a request for a private messaging address.

Communications then appear to have been transferred to private messages, where details of the contract killing were allegedly discussed.

The police documents suggest the 17-year-old split the money he was offered, allegedly keeping 400,000 rubles for himself and giving 600,000 to Mr Abdulazizov to carry out the murder.

The two had met playing basketball together in St Petersburg shortly before Mr Abdulazizov began studying at the city’s Medico-Social Institute, it is claimed.

The documents allege that Mr Abdulazizov travelled around 435 miles (700km) from St Petersburg to Moscow using a car-sharing service.

Rap concert

It is claimed that he retrieved both the murder weapon – a modified pistol – and ammunition from a “drop box” in a wooded area in a Moscow suburb, and then went on to book a hotel room in Krasnogorsk, close to where Lt Col Shishkina lived.

It is also alleged that the 17-year-old sent Mr Abdulazizov the name and geolocation of the victim via the Telegram messenger app. Mr Abdulazizov studied videos on YouTube in his hotel showing him how to load the weapon, the police reports claim.

Mr Abdulazizov reportedly told investigators that on the day of the murder he approached Lt Col Shishkina outside her home at 6.30 in the morning. He said at first he felt light-headed.

But he then pulled himself together, he said, ran around the back of the building to approach Lt Col Shishkina from behind and fired twice at close range, it is claimed. Lt Col Shishkina died minutes later in her husband’s arms.

The killing in broad daylight prompted a major police investigation

It appears police managed to track down Mr Abdulazizov by searching through all the journeys made via taxi apps in the area that morning. Mr Abdulazizov allegedly travelled to Lt Col Shishkina’s house by ordering a taxi using a mobile phone app.

A few hours after the murder, Mr Abdulazizov attended a concert, where rappers Killstation and Brennan Savage were performing: originally he had told investigators the reason he had travelled to Moscow was to go to the concert.

BBC Russian has studied photographs from the concert, which show a man in the audience who appears to be Mr Abdulazizov.

Friends of Mr Abdulazizov say they are in shock that he has been arrested for murder.

“He was rather vulnerable,” a friend, who asked not to be named, told BBC Russian. “He was also very kind. He never offended anyone. I have never heard that he got into any sort of fight.”

Mr Abdulazizov’s girlfriend, who also did not want to be named, told BBC Russian she could not believe he had been arrested for murder.

“The day he was arrested we were supposed to go to the cinema. He asked me what he could buy me as a gift,” she said. “There was no indication he was in any sort of trouble.”

Lt Col Shishkina had worked as a police investigator since 1991. According to media reports, she investigated cases involving drug trafficking, economic crimes and fraud.

Investigators say she had received several threats in the months leading up to her murder and had reported them to her superiors. But she had repeatedly refused offers of state protection, they said.

Nigeria Ports Mess Traps 50,000 Tons of Cashew, Threatens Sector

  • Load from 2018 harvest is worth $300 million: cashew exporters
  • Congestion and inefficiency at Lagos’ ports affecting exports

Gridlock and inefficiency at the ports of Nigeria’s commercial hub, Lagos, have delayed the shipment of 50,000 tons of cashew nuts valued at $300 million and are threatening this year’s output, as traders are left cash-strapped.

The kidney-shaped fruits from last year’s harvest should have been exported by January, according to Tola Fasheru, president of Nigeria Cashew Exporters Association. Instead, they are still in containers on trucks waiting to enter the ports or on wharves, he said.

Roads to Lagos ports are badly congested, with hundreds of lorries queuing to enter the premises and either deliver or pick up goods. In addition, inadequate capacity and infrastructure, stifling red tape and corruption are hampering export processes, according to Fasheru.

Container trucks sit stationary in heavy traffic on the approach to Lagos Port on Oct. 12, 2018.

“There is a palpable lack of synergy among the port operators and this is affecting the business of our members,” he said Thursday by phone from Lagos.

Some members of the cashew association have defaulted on contracts to the extent that foreign buyers are now walking away from them. “They are no longer willing to give us fresh contracts,” said the group’s president.

No Money

The delay is likely to affect the output target of 260,000 tons for the current season, which started in February and will end in July.

“Not one single cashew exporter is in the field now as he is owing on contracts and as a result has no money to operate with,” said Fasheru.

Africa’s sixth largest cashew producer plans to raise its annual production to 500,000 tons by 2023, according to a five-year strategic plan released in 2018 by the National Cashew Association of Nigeria.

The country is the continent’s biggest oil producer and President Muhammadu Buhari’s administration is seeking to reduce dependency on crude and diversify the economy, which contracted in 2016 after oil prices and output crashed. Agriculture is one of the key sectors the government has been trying to boost.

‘Long-term security risks’ from Huawei

Huawei's 5G antennas and masts are already being tested in the UK

The Chinese company Huawei has been strongly criticised in a report by the body overseeing the security of its products in UK telecoms.

The report, issued by the National Cyber Security Centre, which is part of GCHQ, says it can provide “only limited assurance that the long-term security risks can be managed in the Huawei equipment currently deployed in the UK”.

The report reflects what are said to be deep frustrations at the failure of the company to address previously identified problems.

Huawei supplies equipment to telecoms companies operating in the UK, and this report comes ahead of a decision by the UK over whether to allow the company to build next-generation 5G networks.

Poor practices

The US has been campaigning for it to be excluded on the basis the company poses a national security risk.

There is no allegation in the latest report that the company is deliberately introducing backdoors or working to carry out any kind of espionage on behalf of the Chinese state.

Rather, the accusation is that poor practices by the company create vulnerabilities that in turn pose security risks.

The report describes “significant technical issues in Huawei’s engineering processes”.

It also says Huawei’s approach to software development brings “significantly increased risk to UK operators”.

Officials say the rigorous system of oversight means those risks can be mitigated and managed.

But the report also warns that the current arrangement “can only provide limited assurance that all risks to UK national security from Huawei’s involvement in the UK’s critical networks can be sufficiently mitigated long-term”.

Huawei’s kit is often cheaper than that of rivals but with that come concerns that the business model driving its fast growth can lead to sloppiness in its work.

And because the company offers different products to different customers, it has been hard for security officials to confirm that the equipment is all secured to the same standard.

Since 2010, after Huawei partnered first with BT and then other telecoms providers to supply equipment in the UK’s telecoms infrastructure, the Huawei Cyber Security Evaluation Centre (HCSEC), known as “the cell”, has been examining the hardware and software deployed.

In 2014, a board, chaired by National Cyber Security Centre head Ciaran Martin, was set up to oversee its work.

Other government representatives as well as individuals from Huawei and companies that use Huawei equipment also sit on the oversight board.

Concerns were raised in last year’s annual report but this year its report is highly critical of the failure of the company to address these.

Huawei has said it will invest significant sums in dealing with the problems in the next three to five years but it is understood that so far officials have not seen what they consider to be a credible plan to do so.

“No material progress has been made by Huawei in the remediation of the issues reported last year,” the report says.

This raises concerns for the future, according to the oversight board.

“It will be difficult to appropriately risk manage future products in the context of UK deployments, until Huawei’s software engineering and cyber-security processes are remediated,” it says.

“The oversight board currently has not seen anything to give it confidence in Huawei’s ability to bring about change via its transformation programme.”


The report stresses that the decision over Huawei’s role in 5G will come after a wider review by the Department for Digital, Culture, Media and Sport (DCMS).

But its warnings raise serious questions as to whether a company whose work on existing systems has proved so problematic should be allowed to play a major role in building the next generation of systems on which significant parts of our daily life will eventually depend.

The Huawei Cyber Security Evaluation Centre has been examining Huawei technology for several years

In response, a Huawei representative said it understood the concerns over its software engineering capability and took them “very seriously”.

They added:

  • the company’s board had resolved to invest $2bn to improve its capabilities and a high level plan had been developed
  • Huawei would continue to work with UK operators and the National Cyber Security Centre to meet their requirements

EU backs controversial copyright law

Web pioneer Sir Tim Berners-Lee has warned about the possible consequences of copyright changes

The European Parliament has backed controversial copyright laws critics say could change the nature of the net.

The new rules include holding technology companies responsible for material posted without proper copyright permission.

Many musicians and creators say the new rules will compensate artists fairly – but others say they will destroy user-generated content.

The Copyright Directive was backed by 348 MEPs, with 278 against.

The laws on copyright were last amended in 2001.

It has taken several revisions for the current legislation to reach its final form.

It is now up to member states to approve the decision. If they do, they will have two years to implement it once it is officially published.

The two clauses causing the most controversy are known as Article 11 and Article 13.

Article 11 states that search engines and news aggregate platforms should pay to use links from news websites.

Article 13 holds larger technology companies responsible for material posted without a copyright licence.

It means they would need to apply filters to content before it is uploaded.

The European Parliament said that memes – short video clips that go viral – would be “specifically excluded” from the Directive, although it was unclear how tech firms would be able to enforce that rule with a blanket filter.
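The directive itself does not specify how such filtering should work. Purely as an illustration of why critics are sceptical, the sketch below shows the simplest possible blanket filter: fingerprint each upload and block anything that matches a rights holder’s reference list. Everything in it is an assumption made for the example, not any platform’s actual system.

    # A minimal, hypothetical sketch of a blanket upload filter.
    # The fingerprint list and helper names are illustrative assumptions,
    # not any platform's real matching system.
    import hashlib

    # Fingerprints that rights holders might (hypothetically) supply.
    COPYRIGHTED_FINGERPRINTS = {
        "9a0364b9e99bb480dd25e1f0284c8555",  # made-up example value
    }

    def fingerprint(data: bytes) -> str:
        """Reduce an upload to a comparable fingerprint (a simple MD5 here)."""
        return hashlib.md5(data).hexdigest()

    def allow_upload(data: bytes) -> bool:
        """Block the upload if it matches known copyrighted material."""
        return fingerprint(data) not in COPYRIGHTED_FINGERPRINTS

A filter this crude cannot tell a meme edited from copyrighted footage apart from a straight re-upload of that footage, which is precisely the enforcement problem critics point to.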

‘Step forward’ or ‘massive blow’?

Robert Ashcroft, chief executive of PRS for Music, which collects royalties for music artists, welcomed the directive as “a massive step forward” for consumers and creatives.

“It’s about making sure that ordinary people can upload videos and music to platforms like YouTube without being held liable for copyright – that responsibility will henceforth be transferred to the platforms,” he said.

However, the campaign group Open Knowledge International described it as “a massive blow” for the internet.

“We now risk the creation of a more closed society at the very time we should be using digital advances to build a more open world where knowledge creates power for the many, not the few,” said chief executive Catherine Stihler.

‘Noble aims’

Google said that while the latest version of the directive was improved, there remained “legal uncertainty”.

“The details matter and we look forward to working with policy-makers, publishers, creators and rights holders, as EU member states move to implement these new rules,” it said.

Kathy Berry, senior lawyer at Linklaters, said more detail was required about how Article 13 would be enforced.

“While Article 13 may have noble aims, in its current form it functions as little more than a set of ideals, with very little guidance on exactly which service providers will be caught by it or what steps will be sufficient to comply,” she said.

European Parliament Rapporteur Axel Voss said the legislation was designed to protect people’s livelihoods.

“This directive is an important step towards correcting a situation which has allowed a few companies to earn huge sums of money without properly remunerating the thousands of creatives and journalists whose work they depend on,” he said.

“It helps make the internet ready for the future, a space which benefits everyone, not only a powerful few.”

Apple unveils TV streaming platform and credit card

Apple has unveiled its new TV streaming platform, Apple TV+, at a star-studded event in California.

Jennifer Aniston, Steven Spielberg and Oprah Winfrey were among those who took to the stage at Apple’s headquarters to reveal their involvement in TV projects commissioned by the tech giant.

The platform will include shows from existing services like Hulu and HBO.

Apple also announced that it would be launching a credit card, gaming portal and enhanced news app.

Apple announces new services

The event was held in California and Apple Chief Executive Tim Cook was clear from the start that the announcements would be about new services, not new devices.

It is a change of direction for the 42-year-old company.

Steve Carell, Reese Witherspoon and Jennifer Aniston

Apple TV

There had been much anticipation about Apple’s predicted foray into the TV streaming market, dominated by the likes of Amazon and Netflix.

The Apple TV+ app was unveiled by Steven Spielberg and will launch in the autumn.

Spielberg will himself be creating some material for the new platform, he said.

Other stars who took to the stage included Reese Witherspoon, Steve Carell, Jason Momoa, Alfre Woodard, comedian Kumail Nanjiani and Big Bird from Sesame Street.

The app will be made available on rival devices for the first time, coming to Samsung, LG, Sony and Vizio smart TVs as well as Amazon’s Firestick and Roku.

Oprah Winfrey spoke of the potential of a book club on Apple TV+.

The subscription fee was not announced, and notably absent from the launch line-up was Netflix, which had already ruled itself out of being part of the bundle.

“The test for Apple will be, can new content separate them out from their competitors and can they commission and deliver on fresh new content that can reach audiences in the same way that Stranger Things has for Netflix for example?” commented Dr Ed Braman, an expert in film and production at the University of York.

Apple Card

The physical version of the card is made of titanium and does not have a card number or signature space on it.

The Apple Card credit card will launch in the US this summer.

There will be both an iPhone and a physical version of the card, with a cashback incentive on every purchase.

The credit card will have no late fees, annual fees or international fees, said Apple Pay VP Jennifer Bailey.

It has been created with the help of Goldman Sachs and MasterCard.

News stand

The firm also revealed a news service, Apple News+, which will include more than 300 magazine titles including Marie Claire, Vogue, New Yorker, Esquire, National Geographic and Rolling Stone.

The LA Times and the Wall Street Journal will also be part of the platform, the firm said.

It added that it will not track what users read or allow advertisers to do so.

Apple News+ costs $9.99 (£7.50) per month and is available immediately in the US and Canada. It will come to Europe later in the year.

Unlike TV+, the news platform will only be available on Apple devices.

Gaming

Apple Arcade will offer 100 games not available elsewhere.

A new games platform, Apple Arcade, will offer over 100 exclusive games from the app store which will all be playable offline, in contrast with Google’s recently announced streaming platform Stadia.

It will be rolled out across 150 countries in the autumn but no subscription prices were given.

In 2018, analyst firm IHS Markit valued the global gaming market on iOS, Apple’s operating system, at $33.5bn.

There is space within that market for a platform like Apple Arcade which is not financed by in-app purchases or advertising, said IHS director of games research Piers Harding-Rolls.

“Apple’s decision to move up the games value chain with a new, curated subscription service and to support the development of exclusive games for its Arcade platform is a significant escalation of the company’s commitment to the games market,” he said.

“Apple joins the other technology companies Microsoft, Facebook, Google, Amazon and others in investing directly in games content and services.”


Analysis: Dave Lee, North America technology reporter, at the Steve Jobs Theater

Apple is making an aggressive push into several markets in which, thanks to sheer scale alone, it immediately becomes a massive player.

Its TV service has been long in the making, and Apple has amassed a roster of big stars, as expected.

A bigger test will be how creative those ideas will be – a lot of Netflix’s success has been about finding new talent, not throwing money at already famous names.

I also have reservations about how many boundaries Apple will be prepared to push with its creative endeavours: if it’s as controlling with its television as it is with its brand, it will create a catalogue bereft of risk-taking.

But TV is just a small part of what Apple is going for here. It wants (and needs) to turn its devices into the portal through which you do everything else – TV/film, gaming, reading the news… and you’d presume other things in the very near future.

The announcement of a credit card shows how far Apple is prepared to go to make sure life is experienced through your iPhone.

As Oprah put it on stage: “They’re in a billion pockets, y’all.”

Why bots probably aren’t gaming the ‘Cancel Brexit’ petition

Questions have been asked about whether a government petition calling for Brexit to be cancelled has been swamped by bots.

Bots are automated programmes which can carry out a command thousands of times.

The BBC spoke to three cyber-security experts about how likely it is that a number of the 3m signatures gathered so far are not genuine.

They all agreed that the petition’s email validation process would be a deterrent.

Each signatory has to supply a unique email address to which a verification link is sent before their signature can be accepted. UK-based signatories must also share a valid postcode.

While email addresses are easy enough to set up, doing that in real time at high volume is less straightforward.

Additionally, while it is possible to buy lists of email addresses stolen in various data breaches on the black market, the owner of the list would still need to access those email accounts and retrieve the validation email before being able to sign in the name of somebody else.
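To illustrate why that step raises the bar, here is a minimal sketch of an email-verification flow of the kind described, in which a signature is only counted once a unique emailed token is returned. It is an assumption-laden illustration, not the Petitions service’s actual implementation.

    # Hypothetical sketch of signature-by-email-verification.
    # All names and data structures are illustrative assumptions.
    import secrets

    pending = {}       # token -> (email, postcode), awaiting confirmation
    confirmed = set()  # email addresses whose signatures have been counted

    def request_signature(email: str, postcode: str) -> str:
        """Record a pending signature; the token would be emailed as a link."""
        token = secrets.token_urlsafe(16)
        pending[token] = (email, postcode)
        return token

    def confirm_signature(token: str) -> bool:
        """Count the signature only when the emailed token comes back."""
        if token not in pending:
            return False
        email, _ = pending.pop(token)
        if email in confirmed:   # each address can only sign once
            return False
        confirmed.add(email)
        return True

A bot operator would therefore need a working, unique mailbox for every signature, and would have to fetch and return each token in real time, which is far harder to automate than simply submitting a form.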

The email verification would be likely to deter bots, said Lisa Forte, partner at the cyber-security firm Red-Goat.

“Any significant political decision such as this petition is highly likely to attract bots,” she told the BBC.

“This particular petition is now employing email verification before signing, meaning it is much harder and therefore much less likely bots are being employed.”

‘A bit of a pain’

Cyber-security expert Kevin Beaumont said that while it was possible that bots were involved, it would be “a bit of a pain” to build a sophisticated enough programme to cope with the email addresses.

“They would have to make a bot that signs up with unique email addresses, then clicks the unique link to sign,” he said.

The House of Commons declined to comment on its security checks but it did say the Government Digital Service uses “a number of techniques” to identify potentially fraudulent signatures and bot activity.

It is not possible to use the same email address more than once to sign the petition.

However, bot activity could still be used to slow down or crash the platform, meaning that people wanting to leave genuine signatures could be prevented from doing so.

This is known as a Distributed Denial of Service (DDoS) attack.

How secure is the petition platform?

“I’m not sure the system itself is that sophisticated – it fell over as soon as people started voting in large numbers,” said Prof Alan Woodward from Surrey University.

The UK government’s petition platform has crashed several times under the weight of traffic in recent days. The petition launched on 20 February, but has now gone viral.

“Is there some gaming going on? I wouldn’t be at all surprised,” he added.

“It’s a petition, it’s not a vote – it’s not meant to be as secure as an e-voting system.”

According to the rules of the site, anyone can submit a petition. If it gets 10,000 signatures it will receive a government response, and if it gets 100,000 it will be debated in parliament. Beyond that, the numbers don’t make a difference, he pointed out.

Is it Russia?

Former UKIP leader Nigel Farage suggested that “Russian collusion” was behind the unprecedented traffic towards the Brexit petition.

While Russia is notorious for seeking to meddle in the politics of the west, on this occasion there is a question mark over what its intentions would be, added Prof Woodward.

“All the evidence is that Russia was supporting the Leave campaign,” he said.

“So why would they suddenly be supporting Remain?”

While the petition data (which is currently not updating) reveals that signatures are coming in from all over the world – including small numbers from Russia, China and Iran, and one from North Korea, where it is unlikely the page can be seen – the UK government said that any British resident or citizen can sign, wherever they are.

The BBC understands that fewer than 4% of signatures are coming from outside the UK at the time of writing.

It is, however, not difficult to disguise or hide a location on the web.

Has it happened before?

In 2016, an earlier petition calling for a second EU Referendum attracted 3.6m signatures, but was hijacked by bots.

In January 2017 a petition calling for the end of “mass signings by bots” was rejected by the Petitions Committee on the grounds that it was unclear what was expected of the government.

Christchurch shootings: Social media races to stop attack footage

Christchurch was put into lockdown as events unfolded

A gunman opened fire in a mosque in Christchurch, New Zealand, killing 49 people and injuring 20 more. As he did so, he filmed the entire crime and live-streamed it directly to Facebook.

What ensued was an exhausting race for social media pages to take the footage down, as it was replicated seemingly endlessly and shared widely in the wake of the attack.

And through social media, it found its way onto the front pages of some of the world’s biggest news websites in the form of still images, gifs, and even the full video.

This series of events has, once again, shone a spotlight on how sites like Twitter, Facebook, YouTube and Reddit try – and fail – to address far-right extremism on their platforms.

As the video continued to spread, other members of the public put up their own posts pleading with people to stop sharing it.

One pointed out: “That is what the terrorist wanted.”

What was shared?

The video, which shows a first-person view of the killings, has been widely circulated.

  • About 10 to 20 minutes before the attack in New Zealand, someone posted on the /pol/ section of 8chan, a message board popular with the alt-right. The post included links to the suspect’s Facebook page, where he stated he would be live-streaming and published a rambling and hate-filled document
  • That document was, as Bellingcat analyst Robert Evans points out, filled with “huge amounts of content, most of it ironic, low-quality trolling” and memes, in order to distract and confuse people
  • The suspect also referenced a meme in the actual video. Before opening fire he shouted “subscribe to PewDiePie”, a reference to a meme about keeping YouTube star PewDiePie as the most-subscribed-to channel on the platform. PewDiePie has been embroiled in a race row before so some have speculated that the attacker knew that mentioning him would provoke a reaction online. PewDiePie later said on Twitter he was “absolutely sickened having my name uttered by this person”
  • The attacks were live-streamed on Facebook and, despite the original being taken down, were quickly replicated and shared widely on other platforms, including YouTube and Twitter
  • People continue to report seeing the video despite the sites acting swiftly to remove the original and copies, and new copies are still being uploaded to YouTube faster than the platform can remove them
  • Several Australian media outlets broadcast some of the footage, as did other major newspapers around the world
  • Ryan Mac, a BuzzFeed technology reporter, has created a timeline of where he has seen the video, including it being shared from a verified Twitter account with 694,000 followers. He said it had been up for two hours

How have people reacted?

While huge numbers of people have been duplicating and sharing the footage online, many others responded with disgust – urging others not only not to share the footage, but not even to watch it.

Spreading the video, many said, was what the attacker had wanted people to do.

A lot of people were particularly angry at media outlets for publishing the footage.

Channel 4 News anchor Krishnan Guru-Murthy, for example, specifically named two British newspaper websites and accused them of hitting “a new low in clickbait”.

BuzzFeed reporter Mark Di Stefano also wrote that MailOnline had allowed readers to download the attacker’s 74-page “manifesto” from its news report. The website later removed the document and released a statement saying it was “an error”.

Daily Mirror editor Lloyd Embley also tweeted that they had removed the footage, and that publishing it was “not in line with our policy relating to terrorist propaganda videos”.

How have social media companies responded?

All of the social media firms have sent heartfelt sympathy to the victims of the mass shootings, reiterating that they act quickly to remove inappropriate content.

Facebook said: “New Zealand Police alerted us to a video on Facebook shortly after the live-stream commenced and we removed both the shooter’s Facebook account and the video.

“We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware. We will continue working directly with New Zealand Police as their response and investigation continues.”

And in a tweet, YouTube said “our hearts are broken”, adding it was “working vigilantly” to remove any violent footage.

In terms of what they have done historically to combat the threat of far-right extremists, the social media companies’ approach has been more chequered.

Twitter acted to remove alt-right accounts in December 2017. Previously, it had removed and then reinstated the account of Richard Spencer, an American white nationalist who popularised the term “alternative right”.

Facebook, which suspended Mr Spencer’s account in April 2018, admitted at the time that it was difficult to distinguish between hate speech and legitimate political speech.

This month, YouTube was accused of being either incompetent or irresponsible for its handling of a video promoting the banned Neo-Nazi group, National Action.

British MP Yvette Cooper said the video-streaming platform had repeatedly promised to block it, only for it to reappear on the service.

What needs to happen next?

Dr Ciaran Gillespie, a political scientist from Surrey University, thinks the problem goes far deeper than a video, shocking as that content has been.

“It is not just a question about broadcasting a massacre live. The social media platforms raced to close that down and there is not much they can do about it being shared because of the nature of the platform, but the bigger question is the stuff that goes before it,” he said.

At least 49 people were killed in the shootings at two mosques in Christchurch

As a political researcher, he uses YouTube “a lot” and says that he is often recommended far-right content.

“There is oceans of this content on YouTube and there is no way of estimating how much. YouTube has dealt well with the threat posed by Islamic radicalisation, because this is seen as clearly not legitimate, but the same pressure does not exist to remove far-right content, even though it poses a similar threat.

“There will be more calls for YouTube to stop promoting racist and far-right channels and content.”

‘Legitimate controversy’

His views are echoed by Dr Bharath Ganesh, a researcher at the Oxford Internet Institute.

“Taking down the video is obviously the right thing to do, but social media sites have allowed far-right organisations a place for discussion and there has been no consistent or integrated approach to dealing with it.

“There has been a tendency to err on the side of freedom of speech, even when it is obvious that some people are spreading toxic and violent ideologies.”

Now social media companies need to “take the threat posed by these ideologies much more seriously”, he added.

“It may mean creating a special category for right-wing extremism, recognising that it has global reach and global networks.”

Neither underestimates the magnitude of the task, especially as many exponents of far-right views are adept at what Dr Gillespie calls “legitimate controversy”.

“People will discuss the threat posed by Islam and acknowledge it is contentious but point out that it is legitimate to discuss,” he said.

These grey areas are going to be extremely difficult for the social media firms to tackle, both researchers say, but after the tragedy that unfolded in New Zealand, many believe they must try harder.


HP computer stranded in space

Part of the HP space display at MWC

Two HP servers sent up to the International Space Station in August 2017 as an experiment have still not come back to Earth, three months after their intended return.

Together, they make up the Spaceborne Computer, which operates on the open-source Linux system and has supercomputer processing power.

They were sent up to see how durable they would be in space with minimum specialist treatment.

After 530 days, they are still working.

Their return flight was postponed indefinitely after a Russian rocket failure in October 2018.

And HP senior content architect Adrian Kasbergen said they might return in June 2019 if there was space on a flight, “but right now they haven’t got a ticket”.

HP is working with Nasa and Elon Musk’s SpaceX to be “computer-ready” for the first Mars flight, estimated to take place in about 2030.

Cooler air

Currently, the 20-year-old ISS on-board machines return data to Earth for processing, but it can take 30 minutes for the data to travel each way – and it is unlikely to be possible to send data “home” for processing from Mars, which is at least 34 million miles away.

The three original computers on board the ISS had cost $8bn each and taken 10 years to build, Mr Kasbergen told BBC News.

“Our servers cost thousands, rather than millions of dollars,” he added, speaking at the 2019 Mobile World Congress (MWC), in Barcelona, where HP is displaying a replica model of the ISS Destiny Module.

The Spaceborne Computer is currently embedded in the ceiling of the real thing, Mr Kasbergen said.

The servers had needed some bespoke modification – the air cooling system would not work in space.

And, Mr Kasbergen said, there had been unforeseen problems with their power supply as well as the solid-state drive that supports the main hard drive.

But the devices would need to be investigated back on Earth to find out what had gone wrong.