News

AVM Reliance on Third Parties & Building a Cascade

Jun 06, 2012

Featuring Darius Bozorgi, Veros' president and chief executive officer

June 2012 Podcast Transcript - Duration: 19 minutes

Moderator: Welcome to the Veros Real Estate Solutions June podcast. Darius Bozorgi, our president and chief executive officer, is back to continue talking about creating and running a good AVM. His previous commentary, posted in May, can be found on Veros.com under the newsroom section. Today he is going to focus on reliance on third-party validations and the logistics of how to build an AVM cascade. Darius, thanks for joining us again.

Darius Bozorgi: Thanks, Moderator, great to be here. Last time we talked, I think we were discussing the proper care and feeding of an individual AVM. Today I would like to focus a little bit on third-party validations of AVMs, as well as putting those individual models together to create an AVM cascade, waterfall, or pick-right strategy; there are a lot of different names that people use for basically the same thing.

Moderator: Great. Can you provide some insight into the reliance on third-party validations? How can lenders know that they have gotten good value from the use of an AVM?

Darius Bozorgi: Well, let's back up a little bit. In December of 2010 we had the update to the Interagency Appraisal and Evaluation Guidelines. There is an Appendix B to that document which goes into AVM validations and the expectations the regulators have of a lender who is going to use AVMs. When you get down to it, the key message in that guidance, and we could have another full podcast just on that piece of guidance, which is probably something we should do, is independence and objectivity in performing the AVM validation. At the end of the day the lender has a choice: do it themselves, or retain a third party that holds itself out as being in the business of performing AVM validations for that lender.

I've also seen lenders take a hybrid approach, where they do the AVM validations themselves and then have a third-party consultant or provider come in and verify their internal process, so they end up getting the best of both worlds. Now, if you go the route of picking a third-party provider, either in a hybrid model or as your sole source of AVM validations, the key is understanding that company's process and competencies, because another key element that came out of that December 2010 guidance was a pretty clear expectation around all valuation policies that a lender employs.

The regulators made it clear that if you're going to outsource part of that valuation function, whether you hire an AMC for your appraisal work or, in this case, a third-party company to come in and perform your AVM validation, you have the responsibility as the lender, as if you were doing it yourself, to understand the process that third-party consultant is using to perform that validation, and to confirm it meets the independence and objectivity requirements set forth in the guidance. Because it's not going to be good enough at the end of the day, if it falls short, to say, well, I'm the lender, that's not my problem, I hired this company to do it, or I hired this AMC to perform my appraisals; that doesn't get you off the hook. So the key message I would give to lenders is, in picking a third-party consultant, make sure you do your homework on that consultant and fully understand their process, and I will touch on that in more detail later.

Moderator: With regard to building an AVM cascade, what does a lender need to know in order to build or optimize a high-performing cascade?

Darius Bozorgi: Well, okay. In terms of getting into cascades: from your AVM validation, where you are testing individual models, it's from those results that you then think about how you want to go about building a cascade. So whether you're doing it all on your own, using a third-party consultant, or using a hybrid model, at the end of the day you get a bunch of different results, and if you set up your validation correctly you take those results and build basically a rule set. That rule set is going to tell you, based on the circumstances that you set up, which AVM you are going to use in a particular circumstance. The most common cascade or waterfall that exists today is a very basic county-based rule set.

That simply says, for example, in Los Angeles County, if I have a property that comes through the system in LA County, it's going to hit my number one ranked AVM first, and if that has a hit, I'm done. A hit can mean several things: it can mean did I get a value, or did I get a value that met other thresholds, such as confidence score thresholds. If I don't get a hit, I move to my number two, my number three, my number four, and so on. Where we sit today, building an AVM cascade is as much of an art as it is a science. I think over time, as we get more and more data available, more objective and independent data, we will be in a place where it will be more of a science. But right now you really have to know what you're doing in terms of putting these things together.
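For illustration only, here is a minimal sketch of the kind of county-based waterfall logic described above. The model names, the confidence threshold, and the request_value call are hypothetical placeholders, not Veros' actual implementation.

```python
# Hypothetical sketch of a county-based AVM cascade ("waterfall").
# Model names, thresholds, and request_value() are illustrative only.

# Ranked AVM preferences per county, built from validation results.
CASCADE_RULES = {
    "Los Angeles County": ["AVM_A", "AVM_B", "AVM_C", "AVM_D"],
    "Orange County":      ["AVM_B", "AVM_A", "AVM_D", "AVM_C"],
}

MIN_CONFIDENCE = 80  # a "hit" must also meet this confidence score threshold


def request_value(avm_name, property_id):
    """Placeholder for the call to an AVM provider; returns (value, confidence)."""
    raise NotImplementedError  # hypothetical integration point


def run_cascade(county, property_id):
    """Try each ranked AVM in order and stop at the first hit."""
    for avm_name in CASCADE_RULES.get(county, []):
        value, confidence = request_value(avm_name, property_id)
        # A "hit" can mean simply getting a value, or getting a value that
        # also meets other thresholds such as a confidence score.
        if value is not None and confidence is not None and confidence >= MIN_CONFIDENCE:
            return {"avm": avm_name, "value": value, "confidence": confidence}
    return None  # no AVM in the cascade produced a hit
```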

Moderator: What are some of the common pitfalls and how can they be avoided?

Darius Bozorgi: Okay, so here we're starting to blend the third-party validations and the building of a cascade a little bit, and they certainly overlap: if you don't set up a good validation process, you won't have a snowball's chance of building a good cascade. So first you have to make sure that the data you use in your validation is good. What are you looking to test? Are you looking at purchase transactions? Are you looking at refinance transactions? Are you interested in looking at performance by property type, single-family residences versus condos? Are you interested in looking at high-priced properties versus low-priced properties? Do you have geographic regions that are representative of your book of business, and how granular do you want to get? I mentioned county before; some folks, in some of the highly populated metro areas, are certainly getting down to the ZIP code level in some cases. So have you put together a data set that is truly representative of your book of business? Do you have enough of that data to be statistically significant, so you can generate results that you can make good decisions from, and potentially end up building multiple cascades, which I'll talk about in a second? Where are you getting that data? Is that data coming from somebody who happens to have an AVM that may also be participating in that test? That goes back to objectivity and independence.

I mean, if somebody is actually providing that data and they themselves have an AVM, I would say you really have to ask some questions about the objectivity of that test relative to those particular providers, or at the very least do some serious due diligence to make sure you have the appropriate controls in place to deal with that objectivity. But we see that quite a bit. At the end of the day, if you set up your test well, then you run it through the different AVM providers, giving them a short leash. That's another thing I've seen: you talk about pitfalls where tests go out and one AVM provider turns the test around in a day, and another AVM provider doesn't turn it around for two weeks. That doesn't make sense to me.

You have to have a level playing field and you have to have tight controls around the AVM validation. Somebody taking two weeks to return test results in this day and age, I think, again raises some questions about objectivity and independence. But you set up that process, and let's assume for the sake of argument you have set up an independent and objective AVM validation process. Then you have all of these results, and now it's time to analyze them in order to build your cascade. Well, there are a couple of pitfalls with that which you have to be aware of. First, just as an example, what's going to be your analysis data set? I've seen folks try to come up with one common data set, which ideally you should. And when I say come up with a common data set, I am talking about determining outliers and excluding those from the data set where you think, for example, someone may already have the answer. So you see validations where somebody will require the AVM providers to provide the last known sale price and sale date.

And if it looks like an AVM has that last known sale price and sale date, then that property is excluded, but it should be excluded for all. Now, the problem when you exclude for all is that if you don't have enough data, you can get down to very small global data sets that may not be statistically significant. So what people have tried to do in dealing with that, which does raise some questions you just have to be aware of, is, instead of excluding for all, to come up with criteria where they exclude that property only for that particular AVM provider. Again, all of these have pros and cons, and you just have to know what it is you're looking at and what that means.

The problem with excluding for just one particular AVM model is that you end up with data sets that are not consistent across every AVM you're evaluating when you are trying to build that cascade. So you may have one analysis data set with 50,000 properties for AVM one. The set for AVM two may be 45,000 properties, AVM three may be 60,000, and AVM four may be 35,000, and where it gets bad you can really be talking about apples and oranges, where you end up with truly disparate data sets. So ideally, in a perfect world, you want the same common data set, where you are looking at all AVMs across that same common data set.
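To make the trade-off concrete, here is a hypothetical sketch of the two exclusion strategies described above. The field names and the exclusion criterion are assumptions for illustration, not a prescribed methodology.

```python
# Hypothetical sketch of "exclude for all" vs. "exclude per AVM".
# Each record pairs a property's benchmark sale with one AVM's returned value
# and the last known sale the provider reported; all fields are illustrative.

def looks_like_known_answer(record):
    """Flag cases where the AVM appears to be echoing the last known sale."""
    return (record["avm_value"] == record["last_known_sale_price"]
            and record["benchmark_sale_date"] == record["last_known_sale_date"])


def exclude_for_all(results_by_avm):
    """Common data set: drop a property for every AVM if any AVM is suspect.
    Keeps the comparison apples-to-apples but can shrink the sample."""
    suspect = {r["property_id"]
               for records in results_by_avm.values()
               for r in records if looks_like_known_answer(r)}
    return {avm: [r for r in records if r["property_id"] not in suspect]
            for avm, records in results_by_avm.items()}


def exclude_per_avm(results_by_avm):
    """Per-AVM exclusion: drop the property only for the suspect AVM.
    Preserves sample size, but each AVM is then scored on a different data set."""
    return {avm: [r for r in records if not looks_like_known_answer(r)]
            for avm, records in results_by_avm.items()}
```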

But we don't live in a perfect world, so that can be very difficult to do. Those are some of the common pitfalls, and here is one last one. Assume you build an objective validation process, assume you have good analysis data sets, and you properly excluded properties where you think AVMs may have the answers or that weren't appropriate properties for one reason or another, and now you're ready to evaluate the results. The last part of that puzzle is coming up with a ranking function, a simple formula for how you're going to determine who's better when it comes down to your view of accuracy. Do you look at the percentage of properties valued within plus or minus 10 percent? Do you have something for outliers, a penalty for valuations off by more than 20, 25, or 30 percent, whatever your view of the world is? Do you look at things like standard deviation? Do you include hit rate as part of that ranking function, and if so, what weight do you put on it? The problem I see people run into is that they come up with ranking functions that give very little indication of true relative performance from AVM one to two to three. They end up with a function that leaves you with scores like 99, 98, 97, 96, and you really can't tell: how much worse is the 96 versus the 99? There's very little statistical relevance behind that to make decisions around when building that cascade.

So, for example, if you have AVM one and AVM two that are truly neck and neck, but AVM one is double the cost of AVM two, how can you really take price into consideration in setting up your cascade, which is a legitimate concern, if you can't truly identify the relative performance between one and two and be able to point to it? That's another pitfall in building cascades.
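As an illustration of the kind of ranking function being described, the sketch below combines accuracy within plus or minus 10 percent, an outlier penalty, and hit rate into a score, and then lets price break near-ties. The weights, thresholds, tie band, and field names are assumptions for illustration, not a recommended or Veros-specific formula.

```python
# Hypothetical ranking-function sketch; weights and thresholds are illustrative only.

def rank_score(records, requests):
    """Score one AVM from its validation results against benchmark values."""
    hits = [r for r in records if r["avm_value"] is not None]
    hit_rate = len(hits) / requests if requests else 0.0

    errors = [abs(r["avm_value"] - r["benchmark_value"]) / r["benchmark_value"]
              for r in hits]
    within_10 = sum(e <= 0.10 for e in errors) / len(errors) if errors else 0.0
    outliers = sum(e > 0.25 for e in errors) / len(errors) if errors else 0.0

    # Weighted combination: reward accuracy and coverage, penalize large misses.
    return 0.6 * within_10 + 0.2 * hit_rate - 0.2 * outliers


def pick_order(avm_scores, avm_prices, tie_band=0.01):
    """Order AVMs by score; when two fall in the same score band, prefer the cheaper one."""
    return sorted(avm_scores,
                  key=lambda a: (-round(avm_scores[a] / tie_band), avm_prices[a]))
```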

Moderator: Thanks, Darius, that's certainly very insightful commentary. We are actually running out of time; is there anything else you'd like to add?

Darius Bozorgi: Well, I guess I would just highlight two or three items for the listeners. First, as I mentioned earlier, third-party consultants who perform AVM validations have been a positive addition to the industry. But just make sure, as set forth in the guidance from regulators, that you do your homework on your third-party partners and that you understand their process in detail, not only as it is when you hire them but also how it changes over time, and ensure for yourself that their process is independent and objective. Understand where they get their data from. Understand how they do their validations and who is doing those validations. Understand how they're building their cascades, or preference tables, which is a synonymous term.

So that is one major point. The second major point is, when you're building these, make sure that you are building cascades and engaging in validations that are representative of your book of business and what you are trying to accomplish. I see too many people who engage in AVM validations, build cascades, and are disappointed at the end of the day because the AVMs aren't performing as expected, and it all goes back to this: they set up a test process and built something that wasn't representative of where and how they do business and how they were going to use the AVMs in the first place. The whole process was set up to fail from the beginning, so make sure you build these things in a way that is representative of how you use them. The current trend, in my opinion, is that we're going to see a lot more multiple cascades: a purchase cascade, a refinance cascade, and so on. Right now too many lenders, large and small, have just one cascade for everything they do, and I think they're starting to find out that that's not going to work.

My final point would be that when people talk about AVM validations, I think they look at only half of the equation. They think the validation is just a test: they set up a test, build the cascade from the results of that test, implement that cascade in their production system, and think they're done; they think that's the validation. The regulators touched on this very briefly in the December 2010 guidance, but it's a really important point: that's only half of your validation process. The other half is that you have to monitor that cascade in the production setting; you have to have a process to make sure that the results coming out of that cascade are representative and match the expectations that came out of the validation process itself. And where you see large deviations from those expectations, you have to go back and reevaluate the testing process itself.

Moderator: Good. Darius, we certainly appreciate your time and perspective on the AVM sector, which is constantly evolving. For our listeners, additional information can be found in our newsroom at Veros.com. Also, please submit questions or feedback to media@veros.com. Thank you.


Category: Podcasts