In our two previous blogs here and here we looked at how an effective account management strategy can result in profitable decisioning. In this blog we look at what is required to deploy account management strategies.
- Clean data
- Account management strategies (by decision area)
- Account processing system
- Data management platform
- Scoring engine
- Business Rules Management System
- Operational execution
We will look at the first items in the list, those relating to data, analytics and strategy, in this blog. The remaining items, relating to software and operational execution, will be covered in the blog that follows.
1. Clean data
To productionalise account management strategies, consistent and clean data is required. The expected values for each data field used in the strategy should not change, or at least should be effectively managed. This point is often overlooked, but it is critical when the values feeding decision keys change, new products or sub-products are added, and so on. Changes to the account processing system (e.g. to facilitate a new block code) can necessitate changes to scorecards and account management strategies. If those changes are not made, accounts may fail to be scored or to be allocated to an appropriate account management strategy.
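As an illustration, a pre-scoring validation step can catch decision-key values that drift outside their expected domain before they fall through the strategy tree. This is a minimal sketch; the field names, product codes and block codes are hypothetical assumptions, not values from any real system:

```python
# Expected domain for each decision key (hypothetical values for illustration).
EXPECTED_VALUES = {
    "product_code": {"CC-STD", "CC-GOLD", "CC-PLAT"},
    "block_code": {"", "A", "B", "F"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is safe to score."""
    problems = []
    for field, allowed in EXPECTED_VALUES.items():
        value = record.get(field)
        if value not in allowed:
            problems.append(f"{field}={value!r} not in expected values")
    return problems

# A record carrying a new, unmapped block code is flagged explicitly
# rather than silently failing to be scored or mis-routed.
print(validate_record({"product_code": "CC-GOLD", "block_code": "Z"}))
```

Running a check like this whenever the account processing system changes (for example, when that new block code is introduced) surfaces the mismatch before it reaches the scorecards.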
2. Scorecards
There are a variety of scorecards used within credit, and especially within account management. Some decision areas may utilise multiple scorecards. For information on how to adopt credit scoring, visit our previous blog on the topic. Some of the popular scorecards are detailed below:
- Bureau scorecard – lenders receive bureau data on a quarterly or monthly basis and ingest this into their monthly account management records. This scorecard is used across almost all decision areas as it is one of the best ways to rank risk. Most bureaux have an account management bureau scorecard (as opposed to the account origination scorecard used for applications). This bureau score represents the probability that the account holder will roll into a default state within the next 6 or 12 months.
- Behavioural scorecard – this is a scorecard built on the lender’s internal data and represents the probability that the customer will roll into a state of default within the next 6 or 12 months. It is used across decision areas in account management and collections, and is often used alongside a bureau scorecard. A behavioural scorecard incorporates behavioural information from the last month up to the last 12 months. Typically, accounts that are under 4 months old do not have enough behavioural data to produce an accurate score; these accounts either receive a default score or the behaviour score is not used in their branch of an account management strategy. Behavioural scorecards are often segmented into clean (never a missed payment) and dirty (previous missed payments) and are sometimes further segmented on current delinquency status. Behavioural scorecards are normally built at product level, but some institutions (typically banks) may use customer-level scoring.
- Pre-delinquency score – this is a score that predicts the likelihood of a missed payment within the next month (or two) for accounts that are currently up-to-date. These scores are used to prioritise accounts for a customer service call or SMS.
- Balance build/spend score – this is a model that predicts the likelihood of additional spend and is used in existing-customer marketing to target the customers who should be given vouchers to encourage spend.
- Response models – these are models that represent a customer’s likelihood of responding to an offer. They are particularly used in up-sell and cross-sell decision areas.
- Attrition score – this predicts the likelihood of a customer closing their account. Actions can be taken to encourage the customer to remain.
- Right-time-to-call – this model predicts the likelihood of a right-person-contact if the call is made at a specific time and is used for collections or marketing calls.
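The segmentation logic described above for behavioural scorecards can be sketched in a few lines. The segment names are illustrative assumptions; the under-4-months fallback follows the description above:

```python
def select_scorecard(months_on_book: int, ever_delinquent: bool) -> str:
    """Choose the behavioural scorecard segment for an account."""
    if months_on_book < 4:
        # Too little behavioural history for an accurate score:
        # fall back to a default score (or skip the behaviour score
        # in this branch of the strategy).
        return "DEFAULT_SCORE"
    # Clean = never a missed payment; dirty = previous missed payments.
    return "DIRTY" if ever_delinquent else "CLEAN"
```

A real implementation would typically add the further split on current delinquency status, and route to product-level or customer-level scorecards as appropriate.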
3. Account Management Strategies
Once an account is scored, it is passed through account management strategies. As discussed in our previous blog, there are a number of decision areas within account management, and each area will have its applicable strategies. A strategy is essentially a large decision tree that segments the population such that each account receives an action (even if that action is to do nothing). For example, for credit limit management, you may want to offer some customers a 20% limit increase, others 10%, and still others nothing. The decision tree defines your scenarios, and you would then use a terms-of-business table to define the actions to take for each scenario. These actions may be more than just offering a 20% limit increase; you may include a maximum limit, a rounding amount (e.g. always round a limit down to the nearest $50), a communication code, etc. Each decision area will have its own actions. An example of a simple credit limit strategy and action table is illustrated below.
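To make the split between the decision tree and the terms-of-business table concrete, here is a minimal sketch. All scenario codes, score cut-offs, percentages and limits are illustrative assumptions, not recommendations:

```python
# Terms-of-business table: scenario -> (increase %, max limit, rounding, comms code).
TERMS_OF_BUSINESS = {
    "CLO-A": (0.20, 20_000, 50, "LETTER_1"),
    "CLO-B": (0.10, 20_000, 50, "LETTER_2"),
    "CLO-C": (0.00, None, None, None),  # do nothing
}

def assign_scenario(behaviour_score: int, utilisation: float) -> str:
    """The 'decision tree': segment the account into a scenario."""
    if behaviour_score >= 700 and utilisation > 0.5:
        return "CLO-A"
    if behaviour_score >= 600:
        return "CLO-B"
    return "CLO-C"

def apply_action(current_limit: float, scenario: str) -> float:
    """The action table: turn a scenario into a new credit limit."""
    pct, max_limit, rounding, _comms = TERMS_OF_BUSINESS[scenario]
    new_limit = current_limit * (1 + pct)
    if rounding:
        # Always round the limit DOWN to the nearest $50.
        new_limit = (new_limit // rounding) * rounding
    if max_limit:
        new_limit = min(new_limit, max_limit)
    return new_limit

# e.g. a $5,100 limit in CLO-A: 5,100 * 1.2 = 6,120, rounded down to $6,100.
```

Keeping the tree and the action table separate means the business can retune percentages, caps and communication codes without touching the segmentation logic.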
4. Champion/challenger testing
A core basis for understanding the effects of a strategy is the concept of champion/challenger testing. Here we allow two (or more) strategies to run alongside each other, with one strategy being your champion (or base) strategy and the other being your challenger strategy that aims to test a particular action, e.g. a higher limit increase for CLO or a bigger voucher value for marketing. Accounts are randomly allocated to each of the strategies, and the effects of strategy A versus strategy B are measured at key periods of time (I cover this in the monitoring section). Champion/challenger testing is akin to the randomised, double-blinded, placebo-controlled trials used in medicine to determine the effectiveness of a new drug in treating a particular malady.

Critical to champion/challenger testing is randomising the population to ensure that we make a like-versus-like comparison. I wrote about it in detail here. Let’s say you are assigning 50% of the population to credit limit offer strategy A (CLO-1) and 50% to CLO-2. Your authorisation strategy may also have champion/challenger testing (Auth-1 and Auth-2). You need to ensure that your Auth-1 and Auth-2 groups each comprise 50% of CLO-1 and 50% of CLO-2 accounts too, otherwise there may be biases in your measurements, particularly if, for example, CLO-1 is much more aggressive than CLO-2. The more decision areas you have, the more you need to check that you do not have biases in your groups.
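One common way to keep the allocation independent across decision areas is to hash the account identifier together with a per-decision-area salt, so that an account's CLO group tells you nothing about its Auth group. A minimal sketch, where the group names and the 50/50 split are assumptions:

```python
import hashlib

def assign_group(account_id: str, decision_area: str,
                 challenger_share: float = 0.5) -> str:
    """Allocate an account to champion or challenger for one decision area.

    Hashing account id + decision area makes the split stable per account
    (the same account always lands in the same group for a given area) but
    statistically independent across decision areas.
    """
    digest = hashlib.sha256(f"{decision_area}:{account_id}".encode()).hexdigest()
    u = int(digest[:8], 16) / 0xFFFFFFFF  # pseudo-uniform value in [0, 1]
    return "CHALLENGER" if u < challenger_share else "CHAMPION"

# The same account can land in the CLO champion but the Auth challenger:
# assign_group("12345", "CLO") is independent of assign_group("12345", "AUTH").
```

Because the allocation is deterministic, you can re-derive any account's group during monitoring without storing a separate assignment table, and the independence across areas gives you (approximately) 50% of CLO-1 inside each Auth group.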
5. Monitoring of strategies
To determine the effects and value of a strategy, regular monitoring is required. Here one will monitor the champion strategy against the challenger(s). One will use key metrics such as:
- Spend (for voucher campaigns, authorisations and CLOs)
- Average limit (for CLOs and CLDs)
- Utilisation (for CLOs, CLDs and authorisations)
- Over-limit spend (for CLOs and authorisations)
- Bad balances (for CLOs, CLDs and authorisations)
- Bad over-limit balances (for authorisations)
- Re-activation rates (for anti-dormancy campaigns)
- Take-up rates (for cross-sell/up-sell)
- Roll rates (for pre-delinquency)
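At its simplest, a monitoring pack reduces to aggregating metrics like these per strategy group and comparing them. A minimal sketch, where the field names and group labels are assumptions and a real pack would add significance testing before drawing conclusions:

```python
from collections import defaultdict
from statistics import mean

def compare_metric(accounts: list[dict], metric: str) -> dict[str, float]:
    """Average a metric per strategy group (champion vs challenger)."""
    by_group = defaultdict(list)
    for acct in accounts:
        by_group[acct["group"]].append(acct[metric])
    return {group: mean(values) for group, values in by_group.items()}

accounts = [
    {"group": "CLO-1", "bad_balance": 120.0},
    {"group": "CLO-1", "bad_balance": 80.0},
    {"group": "CLO-2", "bad_balance": 200.0},
]
print(compare_metric(accounts, "bad_balance"))
# {'CLO-1': 100.0, 'CLO-2': 200.0}
```

The same aggregation applies to spend, utilisation, take-up rates and so on; the champion/challenger comparison is simply the difference between the per-group figures, tracked at the key periods after the action was taken.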
The annual monitoring of strategies should also incorporate the crowning of a new champion (i.e. choosing next year’s champion from the best-performing strategy over the past year). Having an intimate understanding of the effects of your strategy (e.g. the unintended consequences of an aggressive authorisation strategy) is critical, and understanding correlation is a good place to start.