Insights

The price is tight, but is it right?
Good business intelligence (BI) can help underwriters and pricing actuaries to rate business more accurately in day-to-day underwriting. It can also provide a timely indication that recent business written might not be performing according to original underwriting assumptions.


Underwriting review

The accuracy of underwriting is the most important determinant of an insurance company’s performance. If risks are underpriced, losses are made; if risks are overpriced, good risks are unnecessarily avoided and ‘cash is left on the table’. It is therefore essential for underwriters to get regular feedback on whether actual results are in line with what was expected. Standard “actual vs. expected” reports do not change in format, so the data should be available as soon as the reserving process updates it.
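To make this concrete, below is a minimal sketch of how such an “actual vs. expected” report might be assembled from a policy-level extract. The column names, figures and pandas-based approach are illustrative assumptions, not a description of any particular reserving system.

```python
# A minimal sketch of an "actual vs. expected" report.
# All column names and figures are hypothetical; in practice the expected
# losses come from pricing assumptions and the actuals from the reserving feed.
import pandas as pd

extract = pd.DataFrame({
    "uw_year":        [2019, 2019, 2020, 2020],
    "segment":        ["Property", "Marine", "Property", "Marine"],
    "earned_premium": [1_200_000, 800_000, 1_500_000, 900_000],
    "expected_loss":  [720_000, 520_000, 900_000, 585_000],
    "actual_loss":    [810_000, 470_000, 1_030_000, 560_000],
})

report = extract.groupby(["uw_year", "segment"]).sum()
report["expected_LR"] = report["expected_loss"] / report["earned_premium"]
report["actual_LR"]   = report["actual_loss"]   / report["earned_premium"]
report["A_vs_E"]      = report["actual_loss"]   / report["expected_loss"]

# A/E ratios above 1 flag segments and years running worse than priced for.
print(report[["expected_LR", "actual_LR", "A_vs_E"]])
```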


Underwriters need to assess the reasons for losses:
  • Are they from one specific segment within the portfolio or from across the portfolio?
  • Is the loss experience different from the costing because of higher-than-expected attritional losses or because of specific catastrophic losses?
  • Are catastrophic losses due to natural variance or does the frequency of catastrophic events suggest the catastrophe load is too low?
For long-tail lines of business, the underwriter needs a window of experience that spans a number of development years; with good data, however, the picture emerges more quickly and can be reacted to more quickly (e.g. at the next renewal period rather than the one after). The sketch below illustrates one way to break deviations down along these lines.
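As a sketch of the decomposition suggested by the questions above, the following splits incurred losses into attritional and catastrophic buckets per segment. The cat-event tagging, column names and figures are hypothetical assumptions for illustration only.

```python
# A hedged sketch: split actual losses into attritional vs. catastrophic per
# segment, to see where deviations from costing are coming from.
import pandas as pd

claims = pd.DataFrame({
    "segment":   ["Property", "Property", "Marine", "Marine", "Property"],
    "incurred":  [150_000, 4_200_000, 90_000, 60_000, 75_000],
    "cat_event": [None, "Hurricane X", None, None, None],  # tagged upstream
})

claims["bucket"] = claims["cat_event"].notna().map(
    {True: "catastrophic", False: "attritional"}
)
by_driver = claims.groupby(["segment", "bucket"])["incurred"].sum().unstack(fill_value=0)

# Shows whether losses are cat-driven or attritional, and in which segment.
print(by_driver)
```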

Underwriter Scenario:
Reviewing reinsurance underwriting accuracy
Current situation
Insurers review their portfolios at specific intervals (ranging from one to four times per year). Data is received from finance or line-of-business centres, which send out processed, aggregated performance figures. The reported outputs are typically limited to recent underwriting years and to standard finance metrics rather than the full range of underwriting inputs. The underwriting performance report includes an “actual vs. expected” analysis on nominal (undiscounted) values. A discounted report is also produced so that different lines of business can be compared against each other.
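As a reminder of why the discounted view matters for comparing lines of business with different payout tails, here is an illustrative calculation translating a nominal expected loss into a discounted figure. The payment pattern and flat 3% yield are placeholders for the sketch only.

```python
# Illustrative discounting of a nominal expected loss.
nominal_loss = 1_000_000
payment_pattern = [0.40, 0.30, 0.20, 0.10]  # share of losses paid in dev years 1..4
yield_rate = 0.03                           # assumed flat yield

discounted_loss = sum(
    nominal_loss * share / (1 + yield_rate) ** (t + 1)
    for t, share in enumerate(payment_pattern)
)
print(f"nominal: {nominal_loss:,.0f}  discounted: {discounted_loss:,.0f}")
```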

Problem
The portfolio review process is supported by a single static report that is discussed in a short session. This leaves limited opportunity to review the data, interrogate whether it is accurate or discern what the drivers are.

The data provided to underwriters does not typically come with the relevant claims and reserving information. The discussion is therefore often superficial, resting solely on the results themselves, since the data is not rich enough to reveal the reasons behind deviations. Consequently, emerging issues are not thoroughly discussed or recognised.

Ideal situation
The data relevant to underwriting performance should always be available to underwriters so that they can review it whenever convenient. To fully understand the drivers of results, underwriters need the ability to drill down into the granular underlying data and pinpoint what is driving losses. Filtering out specific factors, such as catastrophic events and emerging risks, enables the team to review “as if” scenarios.
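One way such an “as if” review could work is sketched below: the same performance summary is recomputed with a chosen catastrophe event filtered out. The event name, columns and figures are hypothetical.

```python
# A minimal sketch of an "as if" scenario: re-run the performance summary
# with one catastrophe event excluded. All data is illustrative.
import pandas as pd

claims = pd.DataFrame({
    "segment":   ["Property", "Property", "Marine"],
    "incurred":  [150_000, 4_200_000, 90_000],
    "cat_event": [None, "Hurricane X", None],
})

def performance(df: pd.DataFrame) -> pd.Series:
    # Total incurred losses by segment.
    return df.groupby("segment")["incurred"].sum()

print("Reported:\n", performance(claims))
print("As if Hurricane X excluded:\n",
      performance(claims[claims["cat_event"] != "Hurricane X"]))
```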

Underwriter Scenario:
Knowing the ‘actual’ value of a contract at the point of underwriting
Current situation
An underwriter uses data from the client to underwrite a risk. During the underwriting process, the underwriter makes numerous judgement calls before arriving at the final risk premium (e.g. choosing the alpha, excluding losses that do not reflect the current risk profile, blending experience and exposure curves, etc.). The underwriter then adds loadings for cost of capital, expenses, commissions and tax to the pure risk premium.
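As a rough illustration of this premium build-up, the sketch below grosses up a pure risk premium so that the loadings are recovered from the final price. This is one common convention (some pricing frameworks instead add loadings additively), and all values are placeholders.

```python
# A hedged sketch of the premium build-up: pure risk premium plus loadings.
pure_risk_premium = 850_000

# Placeholder loadings, expressed as shares of the final (technical) premium.
loadings = {
    "cost_of_capital": 0.08,
    "expenses":        0.05,
    "commission":      0.10,
    "tax":             0.02,
}

# Gross up so that the loadings are recovered from the technical premium.
technical_premium = pure_risk_premium / (1 - sum(loadings.values()))
print(f"technical premium: {technical_premium:,.0f}")  # ~1,133,333
```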

Problem
Loadings, such as internal expenses and capital costs, often differ between underwriting systems and the corresponding accounting systems. This means that a piece of business that appears borderline profitable at the time of underwriting may in fact be unprofitable once the loadings are applied in the accounting systems. Consequently, the underwriter does not know the ‘true’ value of a contract and can unwittingly write underpriced business, or underwrite more conservatively to allow for higher-than-expected loadings.
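The sketch below illustrates the mismatch: the same contract assessed once with the loadings held in the underwriting system and once with those later applied in the accounts. All figures are invented; under the assumed loadings, the apparent margin at underwriting flips negative in the accounting view.

```python
# Illustrative figures only: the same contract under two loading sets.
premium = 1_000_000
expected_loss = 780_000

uw_loadings   = {"expenses": 0.05, "capital": 0.08}  # underwriting system view (13%)
acct_loadings = {"expenses": 0.09, "capital": 0.14}  # accounting system view (23%)

def margin(loadings: dict) -> float:
    # Premium less expected losses less percentage-of-premium loadings.
    return premium - expected_loss - premium * sum(loadings.values())

print(f"margin at underwriting: {margin(uw_loadings):>10,.0f}")   #  90,000: looks profitable
print(f"margin in the accounts: {margin(acct_loadings):>10,.0f}") # -10,000: loss-making
```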

Ideal situation
When writing business, an underwriter should have access to data that gives a realistic indication – a ‘true up’ – of the value of the contract. At the point of deciding whether to write the contract, the underwriter would then understand the full financial impact of the deal. When the internal metrics for writing business are not transparent, frustration often builds within an organisation, particularly when the misalignment between underwriting metrics and finance metrics leads to a different assessment of overall underwriting performance and, ultimately, compensation.

Underwriter Scenario:
Visualising current underwriting data
Current situation
An underwriter receives a ‘data dump’ from the broker in an unstructured heterogeneous format. This data is cleansed, formatted and homogenised to make it fit for analysis. The underwriter inputs the data into the relevant modelling systems (e.g. RMS or internally developed tools), discusses the structure with the broker, establishes the cost to buy reinsurance cover, negotiates specific contract terms and, if appropriate, binds the business.

Problem
There are significant time pressures on underwriters to deliver quotes. This often means that they are unable to complete a thorough data cleansing exercise and conduct ‘best practice’ analytics, which can lead to very conservative underwriting to allow for the possibility that exposures are worse than depicted. In some cases the workload is so high that deals are triaged out before any review takes place. Further obstacles arise when the underwriter is unable to present the results in a clear visual format, as this undermines their ability to explain the exposure at peer review or to reinsurers. This can result in wasted investigative resources and higher reinsurance rates.

Ideal situation
As far as possible, the data cleansing process should be automated. Software systems are increasingly capable of structuring and cleansing heterogeneous data sets, and brokers can make this easier for underwriters by providing data in a standard format. Speeding up data cleansing frees underwriters to investigate niche issues and to measure complicated exposures accurately, rather than falling back on conservative estimates driven by uncertainty.
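A minimal sketch of what automated cleansing rules might look like is given below. The column mappings, fixes and the decision to drop incomplete rows are all illustrative assumptions; a production pipeline would maintain far richer rule sets and would flag rather than discard suspect records.

```python
# A hedged sketch of automated cleansing rules for a heterogeneous broker extract.
import pandas as pd

raw = pd.DataFrame({
    "Insured Value": ["1,200,000", "950000", None],
    "CCY":           ["usd", "USD", "USD"],
    "Occupancy":     ["Warehse", "Office", "office"],
})

# Illustrative mapping of broker headings to a standard schema, plus known fixes.
COLUMN_MAP = {"Insured Value": "tiv", "CCY": "currency", "Occupancy": "occupancy"}
OCCUPANCY_FIXES = {"warehse": "warehouse"}

clean = raw.rename(columns=COLUMN_MAP)
clean["tiv"] = pd.to_numeric(clean["tiv"].str.replace(",", ""), errors="coerce")
clean["currency"] = clean["currency"].str.upper()
clean["occupancy"] = clean["occupancy"].str.lower().replace(OCCUPANCY_FIXES)
clean = clean.dropna(subset=["tiv"])  # a real pipeline would flag, not drop
print(clean)
```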

The underwriter should be able to upload the data into a software system that applies automated cleansing rules and analytics. Automated reports built from standard templates still need the flexibility to investigate niche issues, as each risk is unique – and so are the factors driving the exposure.