NAIC Chief Endorses Web of Trust

I recently received a report from an international insurance regulatory meeting in which U.S. insurance commissioners were participating.  The urgency and assertiveness of our regulators hit me like a ton of bricks.

NAIC President Eric Cioppa, the Maine Superintendent of Insurance, opined that cybersecurity regulation cannot be prescriptive, but instead must be principles-based, because it is too hard for supervisors to keep pace with industry.  First, cybersecurity engagement must come from the very top of the company.  A culture that prioritizes cybersecurity is critical because of the weakest-link phenomenon.  Second, an insurer must focus on total preparedness for when a breach occurs.  Without tabletop exercises, a breach could be devastating to the company.  The supervisors are not looking to second-guess a company’s program, but are trying to focus on broad cybersecurity themes.

As we continue to push forward in implementing the Web of Trust, it’s worth understanding how U.S. regulators are approaching the same problems at an industry level, and recognizing that their approach is not all that different from the work we have been doing and are prepared to do more of.  Given that our members’ claims-paying function is an extension of the insurance industry, what regulators think on the topic should very much matter to us.

In my view the reasoning transfers to NCIGF’s role in making certain that our members maintain the most effective level of cybersecurity: if regulators can require carriers to “open their kimonos” as part of their consumer protection mission while a company is in business, we should be doing the same on security, also for the purpose of protecting policyholders and claimants.  Our goals are even narrower than the regulators’.

Beyond the cybersecurity piece, the report should provide a flavor of the scope of discussions at the IAIS (the International Association of Insurance Supervisors) and the active role U.S. regulators are playing there.  The IAIS is a global version of the NAIC (and as Keith Bell reminds us, the NAIC actually created the IAIS).  I point this out because while some of our colleagues continue to digest the “international” aspect of insurance regulation and its application to the U.S., this report gives a tiny peek into its tangibility, importance and durability.

NCIGF Launches Updated Website

I’m pleased to announce that we’ve completed our reorganization and update of the public-facing website.  We’ve taken pains to preserve all of the existing content, but organized it more clearly, with robust search functionality.  There are a few new features I’d like to point out:

  • Search.  There’s a search bar on the menu to the right.  If you’re looking for something in the menu structure and can’t find it, search is the place to go.  In fact, I’d recommend starting there.
  • Events.  We’ve changed how events are organized and added more information to individual events.
  • Laws & Law Summaries.  These are organized in a searchable, sortable table format.
  • The Library.  The old site had a number of Word, PDF and Excel files in different places.  We’ve tried to organize all the existing documents into one sortable, searchable list.

We hope you like the new look and organization of the site.  That said, I know there are things we’ve missed or overlooked.  There might be a broken link or two, or language that needs some wordsmithing.  Going through the conversion from the old site to the new was very much like seeing the site again for the first time.  If you find things that need revision, please don’t hesitate to reach out and let me know.

We’re also in the process of revising the Members Only site and you should expect to see those changes at the mid-point of next year. 

NCIGF Releases 4th Quarter Assessment Liability Report

By Amy Clark

The NCIGF Accounting department has released the 4th Quarter Assessment Liability Report. Download it by clicking here.

The Assessment Liability Report includes – by statutory account of each property and casualty (P&C) guaranty fund – the maximum assessment (capacity), net assessable premium, actual and projected assessment/refund information, lines of business, recoupment provisions, assessment types, and procedures. The NCIGF publishes the Assessment Liability Report – which is compatible with the assessment reporting guideline, SSAP 35R – prior to each quarter-end to assist insurers in estimating their P&C guaranty fund assessment liabilities. Also included is a “5-Year History of Assessments.”

More information about guaranty fund assessments can be found on the Assessment Liability page.

IT Security – The Next Right Thing

I’m going to speak more generally about our IT security path at NCIGF.  There are a bunch of ways it can be done, some bad, some better than others, but our path felt very organic: each step felt like the next right thing to do.

We started with IT audits: every year for three years straight, with the same company.  And every year, the same types of issues would get identified.  The firm would come in, run a general controls audit, pen test, do a social engineering test, look at our policies and give us very similar reports: patch that, this has a vulnerability, this user got phished, develop a comprehensive vendor management policy, etc.  After year two I found myself getting frustrated – we were doing things but we weren’t making progress, and I couldn’t figure out why.  For us, it was partially a staffing issue: we had two developers and me, the CIO, working on insolvency stuff, plus one engineer who also did helpdesk.  Some of the security work was delegated to a developer, mostly automated patching of Linux systems, and our Backup/DR/BCP was outsourced to a consulting agency, but no one was doing security full time.  My main engineer was running around putting out fires.  I reached out to the person who did our first IT general controls audit and brought him on as our Virtual Information Security Officer (VISO).

The first need was a helpdesk person so that my engineer could focus more on IT security.  We hired a college student studying computer science part time for that.  The VISO then helped us build a more comprehensive security program to stop the same things from coming up in the audit reports year after year: get automated phishing and security training for users, buy something that does automated vulnerability scans/reports, create and follow a patching strategy (patching/reboots on Sunday afternoon), add more robust monitoring and log aggregation, and, finally, build a security dashboard – what we call our KRI, or Key Risk Indicator, product.  We took a year off from audits and accomplished all of that in 2017.  We then brought in a different IT audit firm for 2018 and, lo and behold, things were much better.  We were finally seeing different kinds of problems instead of the same problems with different manifestations.
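To make the KRI idea concrete, here’s a minimal sketch of how a dashboard like ours can roll individual indicators up into green/yellow/red statuses.  The indicator names and thresholds below are hypothetical stand-ins, not our actual metrics; our real dashboard pulls its numbers from our monitoring and log-aggregation tools.

```python
from dataclasses import dataclass

# Hypothetical Key Risk Indicators (KRIs); the names and thresholds are
# illustrative, not NCIGF's actual dashboard metrics.
@dataclass
class KRI:
    name: str
    value: float   # current measurement
    warn: float    # at or above this, the indicator turns yellow
    alert: float   # at or above this, the indicator turns red

    def status(self) -> str:
        if self.value >= self.alert:
            return "RED"
        if self.value >= self.warn:
            return "YELLOW"
        return "GREEN"

def dashboard(kris):
    """Print one line per indicator, worst status first."""
    order = {"RED": 0, "YELLOW": 1, "GREEN": 2}
    for kri in sorted(kris, key=lambda k: order[k.status()]):
        print(f"{kri.status():<7} {kri.name:<42} {kri.value}")

dashboard([
    KRI("Unpatched critical vulnerabilities", value=3, warn=1, alert=5),
    KRI("Phishing-test click rate (%)", value=12, warn=10, alert=20),
    KRI("Days since last backup restore test", value=45, warn=90, alert=180),
])
```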

The main takeaways from this year’s audit were: Web of Trust and a Managed Security Service Provider/Managed Detection and Response (MSSP/MDR) service.  We’re in talks with several MSSP/MDR providers right now, with the intention of implementing one of them in 2019.

That’s the path I’d recommend, with a couple of modifications.  You might not need a VISO if you follow the steps outlined above, and you don’t need to roll your own security dashboard if you get an MSSP (the KRI is still super useful though – I’ve got it displayed in my office and look at it daily).  VISOs are good for other things, particularly helping set strategy and providing guidance.  The VISO also holds me accountable to the NCIGF Board, as a check and balance.  The reality is that no one outside of IT has any idea what goes on in IT, but from a security perspective, you really need someone holding you accountable to the people on your board with skin in the game.  That’s the real value of a VISO long term – ongoing accountability.

Business Continuity/Disaster Recovery

I received an email inquiring what a reasonable backup policy would be, given the following issues/constraints:

  • Industry Best Practices;
  • Fear of data loss and inability to restore/replicate; and
  • Costs associated with backing up data, i.e., the more often data is backed up and the longer it is stored, the more it costs.

This is a great question and I figured I might as well put my thoughts down publicly.

Our backup policy:

In accordance with the Security Policy, a backup schedule has been established. The schedule includes a daily backup for each server.

The backup appliance takes a daily virtual machine snapshot of every server hosted at our data center in our virtual environment, as well as the physical server named Backup, which hosts the virtual management console and the shared drives. The snapshots are then replicated to our external data center as part of our DR strategy.

Each week the backups are written to an archive disk. The archive disks are rotated such that the oldest disk that doesn’t include the first day of the month will be overwritten. A disk that does include the first of a month will be kept for a minimum of 1 year.
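To make the rotation rule concrete, here’s a minimal sketch of the disk-selection logic in Python.  The disk records are hypothetical stand-ins; in practice our backup appliance handles this for us.

```python
from datetime import date, timedelta

# A sketch of the rotation rule above: overwrite the oldest disk that
# does not cover the first of a month; disks that do are kept a year.
# Each disk is represented by the date its weekly archive was written.

def covers_first_of_month(week_start: date) -> bool:
    """True if the 7-day window starting at week_start contains day 1."""
    return any((week_start + timedelta(days=i)).day == 1 for i in range(7))

def pick_disk_to_overwrite(disk_dates, today: date) -> date:
    """Return the archive date of the disk to overwrite next."""
    eligible = [
        d for d in disk_dates
        if not covers_first_of_month(d)
        or (today - d).days >= 365  # year-old first-of-month disks free up
    ]
    if not eligible:
        raise RuntimeError("No disk eligible for overwrite; add media.")
    return min(eligible)  # the oldest eligible disk

# Example: the Jan 29 week covers Feb 1, so that disk is kept; the
# next-oldest disk (Feb 5) is the one overwritten.
disks = [date(2018, 1, 29), date(2018, 2, 5), date(2018, 2, 12), date(2018, 2, 19)]
print(pick_disk_to_overwrite(disks, today=date(2018, 2, 26)))  # 2018-02-05
```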

I know this is technical, so I’ll break it down a little bit.  The first thing to keep in mind is that a Business Continuity Plan/Disaster Recovery Plan (BCP/DR) is separate from a backup plan, although they are interrelated.  This means it’s possible to have a BCP/DR without creating a lot of backups.  In practice, a BCP means you’re replicating your entire server environment to an offsite facility – either hot, warm or cold.  Hot means you can flick a switch and be up and running at the offsite facility in seconds.  Warm means you can be up and running in minutes to hours.  Cold means it might take a day or several to be up and running.

You make the business decision on hot/warm/cold by figuring out what your Recovery Time Objective (RTO) is.  If your RTO is seconds, you need a hot site.  If it’s days, a cold site will do.  Obviously, hot sites are more expensive than cold ones.

The other idea to keep in mind is your Recovery Point Objective (RPO): how much data can you afford to lose if something bad happens?  If you take daily backups, your RPO is one day – in the worst case, you lose a day of work.  Related, but separate, is retention: how far back in time you can go.  If you keep daily backups for 14 days, you can, theoretically, restore to any day in the past two weeks.  If you can live with that, then you have a good backup strategy.  If you think you need more than that, you should modify accordingly.
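If it helps, the same decisions can be written down as a tiny sketch.  The hot/warm/cold cutoffs below are my illustrative assumptions, not industry-standard thresholds:

```python
# A toy illustration of the RTO/RPO decisions above; the hot/warm/cold
# cutoffs are illustrative judgment calls, not industry standards.

def site_type(rto_hours: float) -> str:
    """Map a Recovery Time Objective to an offsite-facility choice."""
    if rto_hours < 1:
        return "hot"    # up and running in seconds/minutes
    if rto_hours <= 24:
        return "warm"   # minutes to hours
    return "cold"       # a day or more

def restore_points(backup_interval_days: int, retention_days: int) -> dict:
    """Worst-case data loss (RPO) and the oldest restorable point."""
    return {"rpo_days": backup_interval_days,
            "oldest_restore_days": retention_days}

print(site_type(6))           # 'warm', like NCIGF's 4-8 hour RTO
print(restore_points(1, 14))  # {'rpo_days': 1, 'oldest_restore_days': 14}
```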

NCIGF’s RPO is 1 day (and our retention lets us go back one full year if we need to) and our RTO is 4-8 hours, so we have a warm site.  We can be up and running at our disaster facility within several hours, with yesterday’s backup data.

I hope all that makes sense.  It can be a little dense to suss out.  To answer the bullet points specifically:

  • Industry Best Practices – develop your business case for RTO and RPO.  Those values guide your policy and cost.
  • Fear of data loss and inability to restore/replicate – this is very, very real.  And this is what I want to stress the most in the entire post: if you don’t test your backups on at least a quarterly basis, you don’t have a backup plan.  Lots of people create backups.  Then, when a disaster happens, they realize their backups are broken and can’t be used.  We do a partial restore test on a quarterly basis and a full restore test once a year (see the sketch after this list).
  • Costs associated with backing up data – definitely true: the more often data is backed up and the longer it is stored, the more you’ll pay.  The further back you want to be able to go (i.e., longer retention), the more storage you’ll buy.  But the biggest driver of cost is RTO: how quickly do you want to be up and running in the event of a disaster?
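Here’s a minimal sketch of what a partial restore test can look like, assuming your backup tool can restore a known file into a scratch directory.  The paths are hypothetical; the point is that a restore isn’t verified until the restored bytes match the original.

```python
import hashlib
from pathlib import Path

# A sketch of a partial restore test: restore a known file from backup
# into a scratch directory, then compare checksums with the original.
# Paths in the example are hypothetical.

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """A backup you can't restore isn't a backup: compare checksums."""
    ok = restored.exists() and sha256(original) == sha256(restored)
    print(f"{'PASS' if ok else 'FAIL'}: {restored}")
    return ok

# Example usage after restoring a file into a scratch folder:
# verify_restore(Path(r"\\fileserver\claims\report.xlsx"),
#                Path(r"C:\restore-test\report.xlsx"))
```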

Hope this helps.

Cylance & macOS

Update

This issue has been resolved.

Announcement

Cylance is releasing a hotfix to the macOS Agent version 1494 to provide a workaround to keep macOS Mojave endpoints from deadlocking. This workaround disables the Memory Protection and Script Control features on the Agent when macOS Mojave is identified as the operating system running on the endpoint. The Memory Protection and Script Control policy settings will remain intact.

All macOS versions High Sierra (10.13) and lower will continue to be fully supported by CylancePROTECT, including Memory Protection and Script Control.

The hotfix is being released September 20th, 2018.

Recommendation

Cylance recommends not upgrading to macOS Mojave until the CylancePROTECT Agent 1500 release in late October 2018.

Nobody should attempt to upgrade to Mojave until we verify that the new Agent has been installed.