GSI: Data Management Solutions for the Insurance Resolution System

Guaranty Fund Services (GSI)

 

GSI aims to assist Guaranty Funds with a suite of IT services including:

• Cloud hosting
We can plan and migrate your entire IT infrastructure to the cloud.

• Application Development
GSI can automate existing paper processes in your office, including collecting assessment data, proxy voting for your board, PTO forms, expense reimbursement, and various UDS processes. We can also set up an internal intranet, host your web site, and more.

• IT security services 
Do you have an IT audit coming up? Do you have the results of an audit that you don’t know what to do with?  GSI can help you prepare in the lead-up to an audit and assist with remediation.

• KnowBe4
We can assist or manage your KnowBe4 training and testing.

• Monitoring 
GSI can set up daily security monitoring and provide executive-level reports on what issues currently exist in your environment.

 

Receivership Services

GSI aims to assist receivers with the extraction and conversion of claims data from an insolvent insurer. GSI can do this faster and cheaper than anyone else in the market. We can accomplish this because we’re a for-profit subsidiary of a not-for-profit entity – we don’t have a bench of consultants who need to get paid, nor do we incur the kinds of expenses that other IT consulting companies have. The receiver only pays for the services it uses. Furthermore, GSI isn’t looking to maintain data extraction and conversion for longer than it needs to. We look to partner with the existing company or receivership staff to ultimately transition the UDS work to them. GSI likes to step in at the early, chaotic, but crucially important parts of an insolvency to make sure data gets where it needs to go, then looks to step away as the transition becomes more orderly.

Contact GSI

Encrypted Phishing Email

We received an interesting phishing email attack this weekend – something I had never seen before.  One of the property managers at our building sent a number of us at NCIGF an encrypted email with the subject line: “New Message from your email contact  9801210”.  The body of the email contained an encrypted email message with a link to click to get the message – very standard stuff.  Looking at the link, we saw that it went to a microsoft.com domain that prompted you to enter your credentials.  The good news is that no one here did that, primarily thanks to the quarterly cyber training and monthly phishing tests.  Presumably, the phishing attack was either an attempt to harvest credentials (username and password) or – and this is the theory I find more plausible – the encrypted email, once decrypted, contained some kind of malware/virus payload.  We don’t know for sure because by the time we started doing analysis, the initial “open message” link redirected to “page not found”.

We reached out to the sender this morning and they confirmed their email had been compromised over the weekend.

I’ve never seen a legitimately encrypted email be used as a vector for phishing.  While we do communicate to the building manager via email from time to time, we’ve never had cause to send PII.  No one in the office was expecting documents or messages from the building that would necessitate encryption.  This is a clever attack because it hijacks the notion that encryption=safe.  That said, if you receive an encrypted email from someone out of the blue, it’s always a good policy to be skeptical.  Reach out to the sender by phone (not email!) and verify that they sent it.

I’ve included a screenshot of the email in question with names redacted to protect the guilty.

 

UDS 3.0 – JSON-Based Layout

The UDS TSG seems poised to take up the task of revising the existing UDS standard. For those not familiar with the layout of UDS, they are fixed-width, position-delimited files, like you’d typically find on old mainframe-type systems. They are not easy to work with and virtually impossible to modify. It’s not a stretch to say that modifying the UDS spec is nearly as complicated as completely revamping the standard, because each row in a UDS file is a fixed length. Adding an additional field or increasing the length of a data element “breaks” any existing validation processes. Not only that, but any data that is longer than the spec allows gets lost/truncated. If a field is 30 characters long, but the data you want to put into it is 65 characters long, you’re going to lose 35 characters. There’s simply no place for this data to go. The structure also has the unintended consequence of making UDS files virtually impossible to read and understand without a program to parse them for you.
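To make the truncation problem concrete, here’s a minimal Python sketch. The 30-character field width is illustrative, not the actual UDS spec:

# A minimal sketch of how fixed-width formatting silently loses data.
# The 30-character width is illustrative, not the actual UDS spec.

def to_fixed_width(value: str, width: int) -> str:
    """Truncate or pad a value to exactly `width` characters."""
    return value[:width].ljust(width)

description = "Bale of rags fell on employee, knocking her down, landing on top of her"
field = to_fixed_width(description, 30)
print(repr(field))                        # 'Bale of rags fell on employee,'
print(len(description) - len(field), "characters silently lost")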

Here’s a dummy A record we created for automated testing of the UDS Data Mapper:

HEADER02 55555AIN01IN99210201205082012050820120508P&CN 
A55555IN99135005111111 1234567 Joe Blow foo 3829 Coconut Palm Drive Indianapolis IN46201000020090622200805012010011800001Blow Joe 285 Fishpond Road Chicago IL606010000S12345678910000001200098+ 8UU 309 19990304 N 90 28 UBale of rags fell on employee, knocking her down,landing on top UU
A55555IN99105010222222 1234568 John Smith foo 3829 Coconut Palm Drive Indianapolis IN46201000020090622200805012010011800001Smith John 2709 Rifle Range Rd Chicago IL606010000S98765432110000001200098+ 8UU 309 19990304 N 90 28 UBale of rags fell on employee, knocking her down,landing on top UU
A55555IN99105010333333 1234569 Ralph Steadman foo 129 E Springbrook Dr Indianapolis IN46201000020090604200805012010011800001Steadman Ralph 285 Fishpond Road Chicago IL606010000S23456789010000001200098+ 8UU 312 19990304 N 61 52 UClmt stacking 2330 lb boxes up to the ceiling and felt pain in UU
A55555IN99105010444444 1234560 Bob Dylan foo 511 Benjamin Way Suite 113 Indianapolis IN46201000020090604200805012010011800001Dylan Bob 204 Nelsay Street Chicago IL606010000S34567890110000001200098+ 8UU 329 19990304 N 42 52 UCart ledged on trailer, while pulling on it, pulled musle in bac UU
A55555IN99105010555555 1234561 Django Rheinhart foo 511 Benjamin Way Suite 112 Indianapolis IN46201000020090606200805012010011800001Rheinhart Django 4403 Davidson Road Chicago IL606010000S45678901210000001200098+ 8UU 329 19990304 N 35 10 UWhile unloading trailer, a cart became loaded while attempting t UU
TRAILER 55555AIN01IN99210201205082012050820120508P&C00000000500000006000490+

Even knowing the UDS spec, it’s hard to tell what’s going on here. You really need a piece of software to break up the elements and tell you what’s what.
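For illustration, here’s the kind of positional slicing that software has to do. This is a minimal Python sketch; the field names and offsets are made up for the example, not the real UDS positions:

# A minimal sketch of parsing a fixed-width record by position.
# The offsets below are hypothetical, for illustration only; the real
# UDS spec defines its own field positions and lengths.

FIELDS = [
    ("record_type",   0,  1),
    ("naic_number",   1,  6),
    ("fund_location", 6,  8),
    ("fund_type",     8, 10),
]

def parse_record(line: str) -> dict:
    """Slice each field out of the line by its start/end position."""
    return {name: line[start:end].strip() for name, start, end in FIELDS}

record = parse_record("A55555IN99135005111111 1234567 Joe Blow ...")
print(record)  # {'record_type': 'A', 'naic_number': '55555', ...}

Now, let’s contrast this with how this file might be represented in JSON: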

{
  "record_type": "A",
  "naic_number": 12345,
  "fund_location": "IN",
  "fund_type": 10,
  "date": 20190214,
  "coverage_code": [965005, 965010], //array of coverages
  "insolvent_company_claim_num": 123456789,
  "receiver_claim_number": 88888888,
  "insured": { //insured structure encapsulates all the insured info in one place
    "name": "Bob Smith",
    "phone_number": 3175551212,
    "insured_address": {
      "street_address": "1234 Fake St.",
      "city": "Indianapolis",
      "state": "IN",
      "zip": 46202
    }
  },
  "policy_number": 99999999999,
  "date_of_loss": 20190101,
  "policy_effective_date": 20190101,
  "policy_expiration_date": 20191201,
  "claimants": [ //this is an array of claimants - no more counting claimants!
    {
      "name": "Ralph Steadman",
      "birth_date": 20000101,
      "ssn": 123456789,
      "coverages": [965005],
      "claimant_address": {
        "street_address": "1234 Vegas St.",
        "city": "Indianapolis",
        "state": "IN",
        "zip": 46202
      }
    },
    {
      "name": "Inigo Montoya",
      "birth_date": 19900101,
      "ssn": 123456789,
      "coverages": [965005, 965010],
      "claimant_address": {
        "street_address": "1234 Princess St.",
        "city": "Indianapolis",
        "state": "IN",
        "zip": 46202
      }
    },
    {
      "name": "Robert Zimmerman",
      "birth_date": 19900101,
      "ssn": 123456789,
      "coverages": [965010],
      "claimant_address": {
        "street_address": "1234 Desolation Row",
        "city": "Indianapolis",
        "state": "IN",
        "zip": 46202
      }
    }
  ],
  "claim_reported_date": 20190130,
  "transaction_code": 123,
  "transaction_amount": -12345.00,
  "catastrophic_loss_code": 1234,
  "recovery_indicator_code": 88888,
  "pending_litigation": "Yes",
  "second_injury_fund_indicator": "N/A",
  "tpa_claim_number": 11111111,
  "issuing_company_code": null,
  "wcio_data": { //struct for encapsulating all the WCIO data in one location
    "injury_code": null,
    "part_of_body": null,
    "nature_of_injury": null,
    "cause": null,
    "act": null,
    "type_of_loss": null,
    "recovery": null,
    "coverage": null,
    "settlement": null,
    "vocational_rehab": null
  },
  "wcab_number": null,
  "employer_phone": 3195551212,
  "miscellanea": {
    "cell_phone_number": 123456789,
    "bank_info": "Chase",
    "maiden_name": "Karamazov",
    "location_of_goonie_treasure": "-77.0364,38.8951",
    "tpa_name": "Pat Nat",
    "tpa_location": "Florida"
  },
  "description_of_injury": "Lorem ipsum dolor sit amet, blandit persecuti eu cum, ne usu magna delicata consulatu. Elitr aperiam aliquid at eam, eu integre tractatos pro, ut diam debitis eos. Pro sint nobis vitae cu. Atqui lucilius iudicabit qui in.\n\nIn verear vituperata mea, sea ea aperiri vulputate. Dicat accusata inciderint cum te, brute euismod iudicabit vim et. Id quodsi disputationi cum, et adipiscing incorrupte per. Ancillae detraxit in eum, usu exerci dicunt ex, bonorum appetere democritum nam an.\n\nEx zril cotidieque has, eum ex vero legimus, nam iudicabit instructior eu. Ei sed altera suscipiantur, ubique noster an sed. Mei ne putant nostrum, et has causae eripuit. Sale graece antiopam ea mel. Cu verear habemus dignissim duo, ei omnes tibique mei.\n\nPosse sonet qui ea, at nihil utinam maluisset sit. Nam posse intellegat ex, utamur probatus aliquando et per, mea oratio adolescens no. Mazim omnesque accusata eu has, ei quidam percipit interpretaris nam. Eius principes consequat no nec, graeci vivendo an sit, eum diam debitis explicari ut. His sint postea minimum ne. In modo alienum nec, mel sapientem forensibus te, ut audiam conclusionemque has.\n\nDoctus efficiendi per ei, duo ad altera tractatos urbanitas, sed posse viris erroribus id. Ea saepe omittam iracundia mel. Nec ex facer neglegentur, in has quot verterem voluptatibus. Vim purto menandri et. Sed oporteat sententiae at, at omnis nominavi nec, nam purto habemus ne. Sed laboramus instructior voluptatibus ad."
}

That’s a lot clearer. Everything is defined for you in a series of name/value pairs, so you know exactly what each data element is. It’s also not limited by length, so the full accident description and other fields that are typically longer than the UDS spec contemplates don’t get lost. Also, we’re able to encapsulate data that goes together: all the claimant data is in one block, addresses are consistently represented irrespective of whose address it is, and data are nested instead of denormalized. Finally, perhaps the most exciting feature is that there’s a section for miscellaneous data not contemplated by the spec. Virtually anything can be added in there and it won’t break the spec. If you want it, it’s there. And if not, it can be ignored.
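As a quick illustration of how much easier this is to consume, here’s a minimal Python sketch that reads an A record in the proposed layout and walks the claimants, with no counting and no position math. It assumes the record above was saved to a file named a_record.json without the // annotations (strict JSON doesn’t allow comments); the file name is just an example:

import json

# A minimal sketch of consuming the proposed JSON layout. Assumes the
# record above was saved to a_record.json without the // annotations,
# since strict JSON does not allow comments.

with open("a_record.json") as f:
    record = json.load(f)

# No more counting claimants: just iterate the array.
for claimant in record["claimants"]:
    print(claimant["name"], "-", claimant["claimant_address"]["city"])

# Data the spec didn't contemplate can ride along in miscellanea;
# read it if you want it, ignore it if you don't.
extras = record.get("miscellanea", {})
print(extras.get("tpa_name", "no TPA listed"))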

This is a clear win over the existing spec. I know adoption won’t be easy – all the funds and liquidators will have to change their systems. But I think the improvements are worth the investment. UDS, in its current incarnation (fixed-width files), is over 20 years old. It’s in dire need of a facelift.

NCIGF Launches Updated Web Site

I’m pleased to announce that we’ve completed our reorganization and update of the public-facing web site.  We’ve taken pains to preserve all of the existing content, but we’ve organized it in a clearer way with robust search functionality.  There are a couple of new features I’d like to point out:

  • Search.  There’s a search bar on the menu to the right.  If you’re looking for something in the menu structure and can’t find it, search is the place to go.  In fact, I’d recommend starting there first.  
  • Events.  We’ve changed how events are organized and added more information to individual events.
  • Laws & Law Summaries.  These are organized in a searchable, sortable table format.
  • The Library.  The old site had a number of Word, PDF and Excel files in different places.  We’ve tried to organize all the existing documents into one sortable, searchable list.

We hope you like the new look and organization of the site.  That said, I know there are things we’ve missed or overlooked.  There might be a broken link or two, or language that needs some wordsmithing.  Going through the conversion from the old site to the new was very much like seeing the site again for the first time.  If you find things that need revision, please don’t hesitate to reach out and let me know.

We’re also in the process of revising the Members Only site and you should expect to see those changes at the mid-point of next year. 

NCIGF Releases 4th Quarter Assessment Liability Report

By Amy Clark

The NCIGF Accounting department has released the 4th Quarter Assessment Liability Report. Download it by clicking here.

The Assessment Liability Report includes, for each property and casualty (P&C) guaranty fund statutory account, the maximum assessment (capacity), net assessable premium, actual and projected assessment/refund information, lines of business, recoupment provisions, assessment types, and procedures. The NCIGF publishes the Assessment Liability Report, which is compatible with the assessment reporting guideline SSAP 35R, prior to each quarter-end to assist insurers in estimating their P&C guaranty fund assessment liabilities. Also included is a “5-year History of Assessments.”

More information about guaranty fund assessments can be found on the Assessment Liability page.

IT Security – The Next Right Thing

I’m going to speak more generally about our IT security path at NCIGF.  There are a bunch of ways it can be done, some bad, some better than others, but our path felt very organic: at each step, we did what felt like the next right thing to do.

We started with IT audits: every year for three years straight, with the same company.  And every year, the same types of issues would get identified.  The firm would come in, run a general controls audit and a pen test, do a social engineering test, look at our policies, and give us very similar reports: patch that, this has a vulnerability, this user got phished, develop a comprehensive vendor management policy, etc.  After year two I found myself getting frustrated – we were doing things but we weren’t making progress, and I couldn’t figure out why.  For us, it was partially a staffing issue: we had two developers and me, the CIO, working on insolvency stuff, plus one engineer who also did helpdesk.  Some of the security work was delegated to a developer, mostly automated patching of Linux systems, and our backup/DR/BCP was outsourced to a consulting agency, but no one was doing security full time.  My main engineer was running around putting out fires, and we weren’t making progress.  I reached out to someone who did our first IT general controls audit and brought him on as our Virtual Information Security Officer (VISO).

The first need was a helpdesk person so that my engineer could focus more on IT security.  We hired a part-time college student studying computer science for that.  The VISO then helped us build a more comprehensive security program to stop the same things from coming up in the audit reports year after year: get automated phishing and security training for users, buy something that does automated vulnerability scans/reports, create and follow a patching strategy (patching/reboots on Sunday afternoon), add more robust monitoring and log aggregation, and, finally, build a security dashboard – what we call our KRI, or Key Risk Indicator, product.  We took a year off from audits and accomplished all that in 2017.  We then brought in a different IT audit firm for 2018 and, lo and behold, things were much better.  We were finally seeing different kinds of problems instead of the same problems with different manifestations.

The main takeaways from this year’s audit were: Web of Trust and a Managed Security Service Provider/Managed Detection and Response (MSSP/MDR).  We’re in talks with several MSSP/MDRs right now, with the intention of implementing one of them in 2019.

That’s the path I’d recommend, with a couple of modifications.  You might not need a VISO if you follow the steps outlined above, and you don’t need to roll your own security dashboard if you get an MSSP (the KRI is still super useful, though – I’ve got it displayed in my office and look at it daily).  VISOs are good for other things, particularly helping set strategy and providing guidance.  The VISO also holds me accountable to the NCIGF Board, like a check and balance.  The reality is that no one outside of IT has any idea what goes on in IT, but from a security perspective, you really need someone holding you accountable to the people on your board with skin in the game.  That’s the real value of a VISO long term – ongoing accountability.

Business Continuity/Disaster Recovery

I received an email inquiring what a reasonable backup policy would be, given the following issues/constraints:

  • Industry Best Practices;
  • Fear for loss of data and inability to restore/replicate; and
  • Costs associated with backing up data, i.e., the more often data is backed up and the longer it is stored, the more it costs

This is a great question and I figured I might as well put my thoughts down publicly.

Our backup policy:

In accordance with the Security Policy, a backup schedule has been established. The schedule includes a daily backup for each server.

The backup appliance takes a daily virtual machine snapshot of every server hosted at our data center in our virtual environment, as well as the physical server named Backup, which hosts the virtual management console and the shared drives. The snapshots are then replicated to our external data center as part of our DR strategy.

Each week the backups are written to an archive disk. The archive disks are rotated such that the oldest disk that doesn’t include the first day of the month will be overwritten. A disk that does include the first of a month will be kept for a minimum of 1 year.
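Here’s a minimal Python sketch of that rotation rule.  The disk names and dates are hypothetical – this is not our actual tooling, and it simplifies “includes the first day of the month” to “taken on the first of the month”:

from datetime import date, timedelta

# A minimal sketch of the archive-disk rotation rule described above.
# Disk names and dates are hypothetical; this is not our actual tooling.
# Simplification: "includes the first of the month" is treated as
# "the backup was taken on the first of the month".

def pick_disk_to_overwrite(disks, today):
    """Pick the oldest disk whose backup doesn't include the first of a
    month; first-of-month disks are kept for at least one year."""
    candidates = []
    for name, backup_date in disks.items():
        if backup_date.day != 1:
            candidates.append((backup_date, name))  # weekly: always rotatable
        elif today - backup_date > timedelta(days=365):
            candidates.append((backup_date, name))  # monthly, past its one-year hold
    return min(candidates)[1] if candidates else None

disks = {
    "disk-1": date(2018, 11, 1),   # first of a month: held for a year
    "disk-2": date(2018, 11, 18),
    "disk-3": date(2018, 11, 25),
}
print(pick_disk_to_overwrite(disks, today=date(2018, 12, 2)))  # disk-2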

I know this is technical, so I’ll break it down a little bit.  The first thing to keep in mind is that a Business Continuity Plan/Disaster Recovery Plan (BCP/DR) is separate from a backup plan, although they are interrelated.  This means it’s possible to have a BCP/DR without creating a lot of backups.  In practice, a BCP means you’re replicating your entire server environment to an offsite facility – either hot, warm or cold.  Hot means you can flick a switch and be up and running at the offsite facility in seconds.  Warm means you can be up and running in minutes or hours.  Cold means it might take a day or several to be up and running.  You make the business decision on hot/warm/cold by figuring out what your Recovery Time Objective (RTO) is.  If your RTO is seconds, you need a hot site.  If it’s days, you need a cold site.  Obviously, hot sites are more expensive than cold ones.  The other idea to keep in mind is your Recovery Point Objective (RPO): how much data can you afford to lose if something bad happens?  If you back up once a day, your RPO is one day – you could lose up to a day’s worth of data.  Retention is the related question of how far back in time you can go: if you keep backups for 14 days, you can, theoretically, restore to any point between 1 and 14 days back if a disaster happens.  If you can live with that, then you have a good backup strategy.  If you think you need more than that, you should modify accordingly.

NCIGF’s RPO is 1 day (although we can go back one full year if we need to) and our RTO is 4-8 hours, so we have a warm site.  We can be up and running at our disaster facility in several hours with yesterday’s backup data.

I hope all that makes sense.  It can be a little dense to suss out.  To answer the bullet points specifically:

  • Industry Best Practices – develop your business case for RTO and RPO.  Those values guide your policy and cost.
  • Fear for loss of data and inability to restore/replicate – this is very, very real.  And this is what I want to stress the most in the entire post: if you don’t test your backups on at least a quarterly basis, you don’t have a backup plan.  Lots of people create backups.  Then, when a disaster happens, they realize their backups are broken and can’t be used.  We do a partial restore test on a quarterly basis and a full restore test once a year; a minimal sketch of a restore-verification check follows this list.
  • Costs associated with backing up data (more frequent backups, stored longer, cost more) – this is definitely true.  The further back you want to be able to go (i.e., longer retention), the more you’ll pay.  But the biggest driver of cost is RTO: how quickly do you want to be up and running in the event of a disaster?
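For those who want a starting point, here’s a minimal Python sketch of a restore-verification check.  It assumes a backup has been restored to a scratch directory and you want to confirm it matches the source; the paths are examples:

import hashlib
from pathlib import Path

# A minimal sketch of verifying a test restore: hash every file in the
# source tree and confirm the restored copy matches.  Paths are examples.

def tree_hashes(root):
    """Map each file's relative path to its SHA-256 digest."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

source = tree_hashes("/data/shared_drives")
restored = tree_hashes("/restore_test/shared_drives")

missing = source.keys() - restored.keys()
corrupt = {p for p in source.keys() & restored.keys() if source[p] != restored[p]}

if missing or corrupt:
    print("FAILED:", len(missing), "missing,", len(corrupt), "corrupt")
else:
    print("OK:", len(source), "files verified")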

Hope this helps.