IT Security Pitfalls
Security Magazine recently published a great article by Bill Wosilius, 5 Common Pitfalls in IT Security & How to Overcome Them. It explains how early planning and better initial decisions about security funding and operations can reduce the risk of security breaches down the road. That's because most breaches are not the result of phishing attacks or misconfigured devices, but of shortsighted security decisions made early in the game.
The five common pitfalls the article pinpoints are:
- Failing to budget for professional services along with product renewal costs
- Trying to do it all yourself
- Over-engineering, because you can
- Failing to understand your entire technology environment
- Failing to understand your company culture and its ability to move quickly
The article concludes with advice on how to overcome these pitfalls, boiling it down to building awareness and translating that awareness into sound decision-making.
Cybersecurity Tech Accord Is Signed!
Big news in the cybersecurity world! In mid-April, 34 firms including Microsoft, Facebook, and Oracle signed the Cybersecurity Tech Accord. All firms committed to “stronger defenses, no offensive attacks, capacity building and collective action,” according to an article by Peter Suciu in TechNewsWorld.
Security in the tech world is being hit hard. Juniper Research expects cybercrime losses to reach $8 trillion by 2022. The Cybersecurity Tech Accord was designed with a long-term plan: to protect the integrity of an expected one trillion connected devices over the next two decades.
The accord, still in its infancy, gives participating companies the option to adhere to some or all of its principles. This is great news in that each company can decide what's best for its own situation, rather than being forced to abide by every principle in the agreement.
But there are possible downsides to this accord also. Read the full article.
GDPR – It’s Arrived!
It’s been all over the news lately. Data is everywhere. We’re surrounded by it, and it’s not all about social media anymore. Banks, retailers, and services of all types collect and use our personal data. The General Data Protection Regulation, or GDPR, arrived on May 25, 2018. It’s a “new set of rules designed to give EU citizens more control over their personal data,” according to the article “What is GDPR? Everything you need to know about the new general data protection regulations.”
GDPR terms dictate that all organizations must ensure that personal data is gathered legally. This obviously applies to any organization within the EU. But what does it mean for organizations outside the EU? A lot. We live in a global world, and the rules are the same for any organization, based anywhere, that offers services to businesses or customers in the EU. In other words, it applies to almost every major corporation on the planet.
According to an article in Security Magazine (www.securitymagazine.com), the main difference between companies that were ready for the May 25 rollout of GDPR and those that weren’t is a shift in perspective. The companies that were GDPR ready will reap benefits such as:
- Increased collaboration across the organization
- Greater customer loyalty
- Increased confidence in cybersecurity management
Is your business GDPR compliant? Here’s a handy checklist.
Artificial Intelligence – Researchers Beware
“Life has gotten more convenient since 2012, when breakthroughs in machine learning triggered the ongoing frenzy of investment in artificial intelligence,” begins the article Why Artificial Intelligence Researchers Should Be More Paranoid by Tom Simonite, published on Wired.com.
It’s true; life has changed, mostly for the better because of artificial intelligence (AI). Speech recognition, unlocking cell phones via facial recognition…the list goes on. And it’s been a very lucrative career for the creators behind these technologies.
The article cites a report about the downside of AI, warning that researchers need to pay more attention to the moral burdens resulting from their work. As expected, there are many malicious uses of AI technology. Some of the scenarios the article presents seem to be very Sci-Fi, but with AI technology rapidly evolving, who knows…Sci-Fi may soon become Sci-Fact.
Because of this rapid progression of AI, it’s easy to understand why ethics has been a major topic of discussion. The report focuses on the harm that could result from systems being easily modified for criminal ends. Take autonomous vehicles, for instance: they could be used to cause crashes or deliver explosives. The consensus is that people developing AI technology need to discuss safety and security openly and actively. And a more paranoid mindset wouldn’t hurt either. Putting yourself in the shoes of an adversary might lead to safer technology before it’s released.
Shahar Avin, a researcher at Cambridge University’s Center for the Study of Existential Risk and lead author of the report, “argues that in some cases, the AI community is close to the line.” Take, for example, Google’s research on synthesizing highly realistic voices, and Reddit’s ongoing efforts to battle porn videos manipulated to star celebrities. The frightening fact is that today, hackers don’t have to be human anymore. Read the full article for more details.