CyberWire Daily - No more Apple time-out for Facebook and Google. Inauthentic sites taken down. Fancy Bear paws at Washington, again. Malware-serving ads. Amplification DDoS. Data exposures in India.
Episode Date: February 1, 2019. In today's podcast, we hear that Apple has let Facebook and Google out of time-out. Russia decides it would like access to Apple data because, you know, it's Russian law. Social networks take down large numbers of inauthentic accounts. Fancy Bear is snuffling around Washington again, already, with some spoofed think-tank sites. A shape-shifting campaign afflicts ads. China sees CoAP DDoS attacks. An Aadhaar breach hits an Indian state as the SBI bank recovers from a data exposure incident. Johannes Ullrich from SANS and the ISC Stormcast podcast on the effectiveness of blocklists. Guest is Daniel Faggella from Emerj Artificial Intelligence Research on the future of AI and security. For links to all of today's stories check out our CyberWire daily news brief: https://thecyberwire.com/issues/issues2019/February/CyberWire_2019_02_01.html Support our show. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners.
Today, get 20% off your Delete.me plan when you go to joindeleteme.com slash N2K and use promo code N2K at checkout. The only way to get 20% off is to go to joindeleteme.com slash N2K and enter code N2K at checkout. That's joindeleteme.com slash N2K, code N2K.
Apple lets Facebook and Google out of timeout.
Russia decides it would like access to Apple data because, you know, it's Russian law.
Social networks take down large numbers of inauthentic accounts.
Fancy Bear is snuffling around Washington again, already with some spoofed think tank sites.
A shape-shifting campaign afflicts ads.
China sees CoAP DDoS attacks.
An Aadhaar breach hits an Indian state as the SBI bank recovers from a data exposure incident.
From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, February 1st, 2019.
Happy Friday, everybody.
Apple's timeout punishment of Facebook and Google was sharp, but soon over. Google's employees can again access iOS versions of pre-launch test apps.
Google's ScreenWise meter and Facebook Research collected user data in ways that Apple deemed violations of its terms of use.
The magazine Foreign Policy suggests Russia envies Mountain View's access.
Moscow's Telecommunications Authority says it expects Apple to comply with a 2014 law requiring data collected on Russian citizens to be stored on Russian servers, where it must be decrypted on demand should the security service require it.
As much as they've struggled and continue to struggle with content moderation, social media platforms continue to have more success working against bots and people who are not whom they claim to be.
Facebook this week continued its purge of inauthentic accounts.
The social network has taken down more than 700 pages that were being directed from Iran,
amplifying Islamic Republic state media content and targeting audiences in the Middle East and South Asia.
Facebook stops short of calling it an Iranian government operation.
Patriotic activism is also possible.
Twitter has been active against information operations as well,
offering an account of 2018 election influence attempts emanating from Russia, Iran, and Venezuela.
The company also took down followbot services ManageFlitter, StatusBrew, and Crowdfire.
Twitter found all of these in violation of its automation rules.
Fancy Bear, Russia's GRU, seems to have hit a prominent Washington think tank.
Microsoft said Wednesday in a court filing that it had taken down bogus sites spoofing the Center for Strategic and International Studies, or CSIS.
CSIS has long studied Russian matters, and Fancy Bear's interest in this particular think tank is unsurprising.
Bears know where the honey is.
Observers are throwing their hands in the air over this one,
amid speculation that the operation is battle space preparation for meddling with U.S. elections.
The 2020 election season starts far sooner than any sane person would like, but this is really early.
It suggests to many that deterrence is either not working at all or that it's working imperfectly.
U.S. deterrence has involved naming and shaming, lawfare, sanctions, and spooky direct messages to Russian government trolls,
but these seem insufficient.
The Foundation for the Defense of Democracies,
in a midterm assessment of the current U.S. administration's security policies,
coincidentally notes how difficult it's been to deter Russian hacking
and information operations,
and suggests that if such things continue,
the U.S. respond directly in kind.
And if they do, then look to the security of your Nintendo Switch, Mr. Putin.
Researchers at the Media Trust report the discovery of adaptive malware that's hitting Alexa 500 sites.
The security firm calls the campaign Shapeshifter 3PC.
The Media Trust says the campaign has worked through 44 ad tech vendors to afflict visitors to 49 premium publishers that rank among the Alexa 500 sites.
As attacks were detected and blocked, the campaign would shift to new ad formats,
new delivery channels, and so on.
Security firm Netscout reports a wave of CoAP reflection amplification DDoS attacks. The CoAP protocol is for the most part used by mobile phones in China, and it's there that the effects of the denial-of-service attacks have been mostly felt. But CoAP is expected to come into widespread Internet of Things use, and as it does, the problem can be expected to spread with it.
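For listeners curious about the mechanics: in a reflection amplification attack, the attacker sends small UDP requests with the victim's spoofed source address, so servers "reflect" much larger responses at the victim. The arithmetic can be sketched in a few lines; the byte counts below are illustrative assumptions, not figures from the Netscout report:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor: bytes reflected at the victim
    per byte of spoofed traffic the attacker sends."""
    return response_bytes / request_bytes

# Hypothetical example: a small CoAP GET request elicits a much larger
# response from a misconfigured, internet-exposed device.
factor = amplification_factor(21, 750)

# Every megabit per second of spoofed requests becomes this many
# megabits per second aimed at the victim.
victim_mbps = 1.0 * factor
```

The design point is that the attacker's cost scales with the small request, while the victim absorbs the large response, which is why exposing connectionless UDP services like CoAP to the open internet is risky.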
Another breach has compromised a large number of Aadhaar numbers
from India's national identity system, over 100,000.
In this case, it wasn't a centralized breach.
Instead, the system the state of Jharkhand used
to track the work attendance of government employees
proved susceptible to scraping.
TechCrunch reported that the exposed data, which were apparently left without password protection since 2014,
included names, job titles, and partial phone numbers of 166,000 workers.
Bad enough, but unfortunately the file name on the workers' photos that accompanied these bits of PII was simply the individual's Aadhaar number. The Aadhaar number, which over 90% of Indian citizens have, is roughly analogous to an American Social Security number, at least insofar as it picks out a single unique individual. Breaches of Social Security numbers are bad enough, although with all the breaches
of the last ten years,
most Americans have arrived at a kind of learned helplessness with respect to their Social Security numbers.
They don't like them being exposed, and there are disadvantages to their compromise,
but unfortunately many, perhaps most, now feel that that particular horse has already fled the barn,
and the Social Security number is no longer used as much as it once was to establish identity. It said right on the card that it wasn't to be used for identification purposes, although of course, inevitably, it was. Aadhaar is a more serious matter. You can use it,
or alternatively your thumbprint, to prove your identity when you register to vote or sign up for
some government
service, open a bank account, or conduct any number of other transactions.
The reasons for exposure aren't entirely clear yet, but it seems that Jharkhand left a lot
of data flapping in the breeze.
Just the way the state of Oklahoma recently did stateside, we observe.
So don't get cocky, kids.
Another exposure also hit India this week as the State Bank of India, or SBI, government-owned and the biggest bank in the country, left two months of SBI Quick data exposed without
so much as the fig leaf of a password to cover its shame.
The information was sitting on a server in a Mumbai data center.
SBI Quick is a customer-friendly
service that lets people who bank with SBI to text or phone in questions about their accounts.
Naturally, these communications held information better kept confidential. Phone numbers, bank
balances, recent transactions, whether a check had been cashed, things like that. None of these,
even taken together, amounts to what the dark web black marketeers would call fullz, but they can be damaging enough. One possibility is
that even such partial information could be used to target people, particularly people with big
bank balances, for social engineering attacks. And there's even an Aadhaar angle here, too.
SBI, just a few days earlier, called out the UIDAI, the Unique Identification Authority of India,
the government agency that oversees the Aadhaar system, for sloppy data handling practices.
So, gander, sauce.
TechCrunch reports that SBI has now secured the previously open database.
You'll be solving customer challenges faster with agents, winning with purpose, and showing the world what AI was meant to be.
Let's create the agent-first future together.
Head to salesforce.com slash careers to learn more.
Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs, we rely on point-in-time checks.
But get this.
More than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta.
Here's the gist.
Vanta brings automation to
evidence collection across 30 frameworks, like SOC 2 and ISO 27001. They also centralize key
workflows like policies, access reviews, and reporting, and help you get security questionnaires done five times faster with AI. Now that's a new way to GRC. Get $1,000 off Vanta
when you go to vanta.com slash cyber. That's vanta.com slash cyber for $1,000 off.
And now, a message from Black Cloak.
Did you know the easiest way for cyber criminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected
lives. Because when executives are compromised at home, your company is at risk. In fact, over
one-third of new members discover they've already been breached. Protect your executives and their
families 24-7, 365 with Black Cloak. Learn more at blackcloak.io.
And I'm pleased to be joined once again by Johannes Ullrich. He's the Dean of Research for the SANS Institute. He's also the host of the ISC Stormcast podcast. Johannes, it's great
to have you back. Today we wanted to talk about the effectiveness
of block lists. What do you have to share with us? Yes, so a block list is something that I'm often being asked about with our system, DShield, at the Internet Storm Center. We are collecting a lot of data about IP addresses, and of course some of that data indicates that IP addresses are not behaving the way they're supposed to.
Same, of course, for domain names and the like.
And what I've found over the last years is that block lists, the way people typically implement them, are not really all that useful.
In particular, the way sort of a lot of the web traffic works these days.
And we publish a very short block list, just the 20 entries of the 20 nastiest networks,
if you want to call it this way. But even there, we often do see some false positives.
And the other problem is that the attacks that you really worry about, they use very flexible IP addresses. They change
their source addresses quite a bit. So really not that much use in spending a lot of time and effort
in implementing block lists. Now, what about if you're trying to block something like Shodan?
Yeah, so Shodan is this search engine that enumerates the internet of things. And we have actually done a test with that recently.
One of our STI graduate students did a research paper where what he looked at was whether or not being listed in Shodan actually makes a difference when it comes to the attack traffic we're seeing.
And we didn't really see a correlation there.
Now, one thing we did see, however, is that the amount of traffic that you're sort of blocking at your firewall that comes from researchers like Shodan, that's actually quite substantial. Not a lot of different IP addresses that they're using, but it can be sort of in the 20 to 30 percent range if you're just looking at the number of packets that you're dropping at your firewall that are caused by research scans like Shodan.
There are a number of other search engines like that.
We also noted that a lot of the published block lists that you find for systems like Shodan are quite incomplete.
They use a lot more systems through their scanning than is actually sort of commonly being published.
So is this a matter of perhaps a block list not being the most effective place to use your time and energy?
Correct. Like, yes, it blocks some attacks, but are these really the hacks that you worry about? For the most part, what you find in
block lists are things that are sort of these common run-of-the-mill scans. And if you're
vulnerable to them, you probably have other problems. The other issue is always the false
positive issue. Like we published, for example, a list of crypto coin mining pools. And that's
sort of a useful list in the sense that crypto coin miners,
well, they're a very common infection tool.
And so seeing outbound connections to these crypto coin mining pools
may be an indicator that you are infected.
The problem here is that a lot of these tools, for example,
now hide behind networks like Cloudflare.
And once you're blocking Cloudflare IPs, well, you're also blocking thousands of other websites
that are associated with Cloudflare. So again, your risk of false positives is rather large.
The way I kind of like people to use these lists is,
the way I put it is,
you know, color your logs,
add color to your logs.
So instead of blocking,
just have tools that add automatic notes
to your logs saying,
hey, this may be a crypto coin mining pool.
So then you can manually check
and make sure whether or not
the system is infected or not.
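Ullrich's "color your logs" approach, annotating suspicious entries rather than blocking them outright, can be sketched in a few lines. The indicator list and log format here are hypothetical, purely for illustration:

```python
# Hypothetical set of crypto coin mining pool domains; in practice this
# would come from a published indicator list.
MINING_POOL_INDICATORS = {"pool.example-miner.net", "xmr.example-pool.org"}

def annotate_log_line(line: str) -> str:
    """Add a note to a log line instead of blocking the connection,
    so an analyst can manually verify whether the host is infected."""
    for indicator in MINING_POOL_INDICATORS:
        if indicator in line:
            return f"{line}  # NOTE: possible crypto coin mining pool"
    return line

log = "2019-02-01T12:00:00 outbound 10.0.0.5 -> pool.example-miner.net:3333"
annotated = annotate_log_line(log)
```

The trade-off this illustrates is exactly the one discussed above: annotation avoids the false-positive cost of blocking shared infrastructure like Cloudflare, at the price of requiring a human to follow up on the flagged entries.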
All right.
It's good advice.
Johannes Ullrich, thanks for joining us.
Cyber threats are evolving every second,
and staying ahead is more than just a challenge.
It's a necessity.
That's why we're thrilled to partner with ThreatLocker,
a cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications, securing sensitive data, and ensuring your organization runs smoothly and securely.
Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe and compliant.
My guest today is Daniel Faggella.
He's the founder and CEO of Emerj Artificial Intelligence Research,
a market research firm focused on the implications of artificial intelligence in business.
He believes that the most important ethical considerations of the coming years
will be the creation or expansion of sentience and intelligence in technology.
Generally speaking, AI is seen as kind of the meta-umbrella under which machine
learning sits. Now, a lot of people will argue that machine learning is the only thing that's
actually AI. Today, a lot of PhDs, and we interview a lot of them, are of the belief that it sits
under the broader umbrella of AI and that there's a lot more vistas to explore under the broader
domain of AI. Old school AI was kind of baking human
expertise into a bunch of if-then scenarios to hopefully shake out some kind of a pachinko
machine decision that a human would make as well. Machine learning is more hurling a million
instances at a bunch of nodes in a neural network to get that network to pick up on patterns and
determine what image is cancerous or what tumor images are cancerous or non-cancerous or what
pictures have a stop sign or don't have a stop sign, etc. So the dynamics are changing,
but broadly in terms of the two terms, those are good ways to understand them.
Now, one of your focuses is the ethical considerations of these technologies.
Where do you see us headed there?
At the highest level, unabashedly, my interest is in sort of the grander transitions of AI
in, let's say, the next 30 to 50 years, where I think we're going to come up with kind of
some post-human transition scenarios, whereby we have certainly hyper-capable and intelligent machines,
but potentially also exceedingly self-aware machines by maybe, let's say, 2060 or so.
And that if we were able to replicate sentience and legitimate general intelligence in machines,
the ethical ramifications of whatever is after people is astronomically important,
just like the Earth has a lot more moral weight to it because there's humans here as opposed to, let's say, just amoebas or crickets.
The Earth will have a lot more moral weight when it has astronomically intelligent AI
entities and sort of how the transition beyond humanity occurs, I think, is the great concern.
But when we speak about these things to business and government leaders, it's a lot more about
algorithmic transparency.
How do we know these decisions are being made correctly? Responsibility,
who's going to be responsible when this machine does something that could harm people or negatively affect people? So it's more about practical applications of individual use cases.
Well, you know, I think back to, I guess, the 80s in the early days, and we had things like
there was a program called Eliza that would
simulate being a therapist for you. And basically, like you said earlier, it was a bunch of if then
things that would parse your language and just keep on feeding you questions. But every now and
then it would shoot something back at you that would sort of make you sit up in your seat and go,
oh, wow, you just referred to something from earlier in the conversation. And certainly we've
come a long way since then. So I guess I'm curious, where do you think we are in the evolutionary
pathway towards eventual or would you say inevitable sentience? So we've polled three
dozen PhDs at a clip about the emergence of self-awareness in AI on a number of occasions.
The most recent bigger poll that we did on that topic had kind of the biggest lump in the bar chart happen in like the 2065, kind of 2060 range.
Whenever that day does come, Dave, it is sort of the grand crescendo of moral relevance.
So when we do broad polls across a swath of PhDs who've been in this space for as long, if not longer in some cases, than I've been on the earth, we see lumps there.
You know, the coming 50 years maybe, this is sort of a potentially reasonable supposition.
Pardon, I don't know if this is a naive question, but when that moment comes, will we know?
Yeah, that's not a naive question by any means, Dave.
I mean, it's a perfectly reasonable question. I will be frank. I think that it is a screaming shame. I put it way
up there on the furthest distal issues with the human condition that we don't firmly understand
sentience enough in terms of what it is, what constitutes it, how it emerges. Here's the deal,
man. Here's the deal. Things aren't morally relevant unless they're aware
of themselves. If you break your computer right now, just shatter it on your knee.
That's going to be kind of annoying because someone worked hard to build that because you're
going to have to go somewhere and get a new one, but whatever, you just go recycle it. But if you
do that with a dog, you will be fined, and fined maybe a lot of money, and maybe, you know,
be relegated to have to do some therapy or something.
If you do that to a child, then you may just go to jail for the rest of your life.
And so the more self-aware, the more rich and robust the internal experiences of an entity are,
the more moral weight it has, and we don't know how that arises.
So what constitutes things that are morally relevant is predicated on this ephemeral substance of which we have essentially no understanding. That by itself, I just want to cry. And I think if we don't get to that root, or far enough to that root, we may
develop self-aware machines that are aware of themselves in ways that we just can't detect
because we don't end up chipping away at that core science of what self-awareness is. I think
there's nothing more important. It's a tough one. We may get to the AGI and to the self-aware AI
before we know how the heck to measure it. And you are darn well right about that. I hope not,
but you're right about that. So what do you suppose the implications are going to be?
As these technologies continue to develop and become more sophisticated,
how do you see our interactions with them changing? Stephen Wolfram, the guy behind
Wolfram Alpha, has this interesting hypothesis that there is a potential singularity-like scenario whereby humans wholeheartedly give up on their own volition
because they work hand-in-hand with systems
that recommend and coax and prompt them so well.
So these systems will get you up on time,
will get you feeling good,
will prompt you to the right action,
will set the right meeting,
will recommend the product that is so much better than the one that you would have guessed at randomly. Like you're just going
to be so much more satisfied with the food it orders, with the movies it suggests, with maybe
the movies it creates, builds entirely new programmatically generated films just for your
preferences beyond anything you could consciously ask for, but hyper tuned into your preferences
and a bunch of deep levels. And that these systems,
like people may just completely bail on volition because these systems can prompt them and coax
them through the world better than they can themselves. And that that's like a potential
trajectory for where we're going as a species. I'm not necessarily going to get dystopian here,
but I certainly think that there's a pull in that direction. I mean, famously, you know,
Facebook and these folks are kind of, you know, under fire now
for better or for worse for their ubiquitous influence over, you know, our actions and
attention and anxieties and whatever else.
And I think that that'll only become more and more embedded.
I think we're at least going to be aware of these influences.
I think, you know, you see GDPR and there is going to be some emphasis maybe around
children and technology.
I could see potential regulation around those things.
But all in all, in the coming years, I think we're only going to become more and more embedded until the machines are actually part of our thinking in a physical and literal sense, which would mean chips.
And I think that's part of the big grand trajectory is when that kind of meld occurs. So I kind of see a
melding in the metaphorical way that we have it now, only increasing, and then an eventual melding
all the way into extending our cognition in the very literal, embedded with the neurons kind of
way in, let's say, 20 years, possibly even a little bit less.
That is Dan Faggella. He is from Emerj Artificial Intelligence Research.
And that's the Cyber Wire.
For links to all of today's stories, check out our daily briefing at thecyberwire.com. And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for CyberWire Pro.
It'll save you time and keep you informed.
Listen for us on your Alexa smart speaker, too.
The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies.
Our amazing CyberWire team is Elliot Peltzman,
Puru Prakash, Stefan Vaziri, Kelsey Vaughn,
Tim Nodar, Joe Kerrigan, Carol Terrio, Ben Yellen,
Nick Volecki, Gina Johnson, Bennett Moe, Chris Russell,
John Petrick, Jennifer Iben, Rick Howard, Peter Kilpie,
and I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.

Your business needs AI solutions that are not only ambitious, but also practical and adaptable.
That's where Domo's AI and data products platform comes in. With Domo,
you can channel AI and data into innovative uses that deliver measurable impact. Secure AI agents
connect, prepare, and automate your data workflows, helping you gain insights, receive alerts,
and act with ease through guided apps tailored to your role. Data is hard. Domo is easy.
Learn more at ai.domo.com.
That's ai.domo.com.