The 2020 Approach to Real Estate Valuation

Marc Rutzen
9 min read · Jan 31, 2020

As of Dec. 31, 2019, there were about 78,000 active real estate appraisers in the U.S., and last year there was approximately $500 billion in commercial real estate transactions, according to Real Capital Analytics.

Consider, for a minute, that just about every real estate loan made for a residential or commercial property acquisition, anywhere in the country, requires a certified appraisal by at least one licensed real estate appraiser. As the arbiters of truth, appraisers must take a rigorous and exacting approach to evaluating real estate investment potential. Oftentimes, however, their approach is not as exact as it could be, and this can be a detriment to the industry.

This is not an attack on the appraiser. It is a call-to-action to infuse the power of machine learning into the appraisal process. Here are a few issues we have with the way it’s done today:

Dated Information — Appraisers can only use closed sales transactions to estimate value in the sales comparison approach and for deriving cap rates. This means that, by definition, they must rely on outdated information.

  • Pro: It prevents them from overestimating market growth and getting too aggressive on valuations.
  • Con: It means that in markets with limited information (e.g. no comparable property sales in the last year), it is possible that they will not have enough data to effectively provide price comps and cap rates. If an appraisal is based on old data because that is all that is available in a market, how would they account for a broad surge in listing prices that will (in time) lead to higher sales prices?

Limited Information — The standards to which appraisers are held prevent certain types of information from being used in an appraisal, such as active listings, demographic shifts, and market trends.

  • Pro: It prevents them from overestimating market growth and getting too aggressive on valuations.
  • Con: Appraisers can’t truly leverage demographic and economic market trends to inform value estimates. This is by design — appraisals are not meant to be forward-looking. However, the reality of any market is that participants consider future growth when making a purchase. Under the Akerson format for estimating cap rates, for example, future rent growth and capital appreciation play a significant role in the cap rates, and therefore the prices, that market participants are willing to offer (see the sketch after this list).
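
To make that concrete, here is a minimal sketch of the Akerson format in code. The inputs are illustrative assumptions, not market data, and this is the textbook arithmetic rather than any particular appraiser’s workflow:

```python
def akerson_cap_rate(ltv, mortgage_constant, equity_yield,
                     pct_loan_paid_off, appreciation, years):
    """Akerson (mortgage-equity) format for an overall cap rate.

    Illustrative inputs only:
      ltv               -- loan-to-value ratio
      mortgage_constant -- annual debt service / loan amount
      equity_yield      -- investor's required equity yield rate
      pct_loan_paid_off -- share of the loan amortized over the hold
      appreciation      -- expected change in total property value
      years             -- projection (holding) period
    """
    # Sinking fund factor at the equity yield rate over the hold.
    sff = equity_yield / ((1 + equity_yield) ** years - 1)

    basic_rate = (ltv * mortgage_constant            # debt component
                  + (1 - ltv) * equity_yield         # equity component
                  - ltv * pct_loan_paid_off * sff)   # credit for equity buildup

    # Expected appreciation lowers the rate buyers will accept.
    return basic_rate - appreciation * sff


# 10% expected appreciation over a 5-year hold noticeably compresses
# the cap rate, which is exactly why forward-looking growth matters.
print(round(akerson_cap_rate(0.75, 0.07, 0.12, 0.15, 0.10, 5), 4))  # ~0.0491
```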

Human Subjectivity — The biggest drawback to appraisers is that they are, at the end of the day, human, and certainly not immune to being influenced or to drawing subjective conclusions.

  • Pro: N/A
  • Con: Human subjectivity, itself.

When I worked as a real estate investment analyst, I regularly attempted to sway the conclusions of appraisers. For example, before accompanying a commercial appraiser on a walkthrough of a building we owned, I pulled my own comp set, stacked with the highest-priced transactions in the market, and handed it to them along with a summary of the improvements we had made and the dollars invested in each. I then walked through with the appraiser and highlighted the best portions of the building, purposely steering them away from anything we hadn’t invested in improving.

At the end of that appraisal, the value came in approximately 12% higher than my own estimate, which used conservative metrics and a comp set that included both good and bad comps. Did my guidance during the walkthrough have any impact on the appraiser’s opinion of value? It’s impossible to say… which is precisely the problem with any subjective approach to real estate valuation. Also related to this topic are the assumptions appraisers make about the incremental value of additional square footage, bedrooms, bathrooms, and amenities like garages, fireplaces, renovated kitchens and baths, etc.

In studying for certification, appraisers are taught to use regression analysis to calculate the incremental value of various characteristics. In practice, this rarely happens. Appraisers draw completely unscientific conclusions about the contribution of each factor toward value, and lenders and investors rely on these conclusions to inform massive investment decisions. Granted, this is a bigger issue in single-family valuation than in commercial, but it is crazy to think how much capital is allocated based on such a subjective approach to analysis.
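
For what it’s worth, the regression approach appraisers are taught is not hard to express. Here is a minimal hedonic-pricing sketch with a made-up comp set (the column names and figures are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical comp set: sale prices plus characteristics.
comps = pd.DataFrame({
    "price":     [410_000, 385_000, 452_000, 399_000, 431_000, 375_000],
    "sqft":      [1_850, 1_700, 2_100, 1_800, 1_950, 1_650],
    "bedrooms":  [3, 3, 4, 3, 4, 3],
    "bathrooms": [2, 2, 3, 2, 2, 2],
    "garage":    [1, 0, 1, 1, 1, 0],
})

X = sm.add_constant(comps[["sqft", "bedrooms", "bathrooms", "garage"]])
model = sm.OLS(comps["price"], X).fit()

# Each coefficient estimates the incremental value of that
# characteristic, holding the others constant.
print(model.params)
```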

For each of these reasons, it is not entirely safe to rely on the opinions of appraisers in estimating value, yet it is done this way everywhere, all the time. How many millions, nay billions, of dollars are inefficiently allocated every year because of this?

It’s difficult to say precisely, but a 2012 study by KC Conway, an executive managing director at the brokerage firm Colliers International, and Brian F. Olasov, a managing director at the law firm McKenna Long & Aldridge, found a wide discrepancy between the appraisal values and the eventual sales prices of the properties. According to a New York Times article on the study (check it out here: Accuracy of Appraisals Is Spotty, Study Says), of the 2,076 properties analyzed, 64 percent were appraised at values that exceeded the sale price, by a total of $1.4 billion, while 35.5 percent were appraised at less than the sale price, by a total of $661 million. In extreme instances (121 of the properties), the appraised value was more than double the sale price, and in 132 examples, the appraisal was less than 70 percent of the sale price.

These results were mirrored in a 2011 study by Cannon and Cole which analyzed the accuracy of appraisals for the U.S. commercial real estate sector, using 1984–2010 data from the National Council of Real Estate Investment Fiduciaries (NCREIF). Comparing property appraisals with actual transactions, the authors documented that, on average, appraisals are more than 12% above or below the subsequent transaction price (correcting for the time lag between property transactions and valuations). These results are consistent with the findings of Fisher, Miles, and Webb in a 1999 study conducted for the 1978–1998 period. They documented an average absolute deviation of 9% to 12.5% between appraisals and transaction prices.

Whoa, that’s a lot of inefficiency.

Now that you understand the issues with the appraisal industry, you can see why we sought to correct these wrongs.

By simultaneously aggregating and analyzing the data used by appraisers, brokers and market analysis consultants, and using actual data science to quantify the impact of every variable that influences real estate investment potential, Enodo provides a truly objective approach to value. Here’s how we do it.

The Enodo Approach

1. The Data Pipeline

In order to provide insight in every market throughout the U.S., Enodo collects detailed data on property characteristics and amenities, market rents, and unit availabilities from 3 different property management software integrations, 10 different rental listing sites, and over 5,000 individual property websites.

Although data on amenities, square footage of units, etc. doesn’t often change, our data pipeline pulls updated rent and unit availability data on a weekly basis for approximately 2 million multifamily properties, covering every single market throughout the country.

But the data we receive from listing sites isn’t always perfect — it can be outdated, contain the occasional keystroke or “fat finger” data entry error, or represent short-term leases (which are often priced much higher) pulled from revenue management software.

To address this, we built a series of algorithms to remove outliers and bad data, and then combine the remaining good data to train our amenity premium and price prediction algorithms.

For example, if our cleaning algorithms detect that a price is 2.5 standard deviations above or below the average for the same unit type in the same building, we throw out that data point. In addition, we analyze in detail whether the chunk rent and per-square-foot rent for each listing are reasonable based on property characteristics and average market rents, and we automatically compare new data to historical data for the same property to detect whether a rental listing was inadvertently priced incorrectly.
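
A minimal pandas sketch of that standard-deviation filter (the column names are hypothetical, and the production pipeline obviously does much more):

```python
import pandas as pd

def drop_rent_outliers(listings: pd.DataFrame, z: float = 2.5) -> pd.DataFrame:
    """Drop listings priced more than z standard deviations from the
    mean rent for the same unit type in the same building."""
    grouped = listings.groupby(["property_id", "unit_type"])["rent"]
    mean = grouped.transform("mean")
    # Single-listing groups have an undefined std; treat it as zero so
    # those listings are never flagged.
    std = grouped.transform("std").fillna(0.0)
    return listings[(listings["rent"] - mean).abs() <= z * std]
```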

After the listing data is thoroughly cleaned and the outliers have been removed, we use our parsing algorithms to pick out characteristics like building and unit amenities, time on the market, floor, security deposit, etc. This provides us with detailed data on the supply of apartment units available in each market, which we can then aggregate at the property level and analyze.
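
As a toy illustration of the parsing step, here is a keyword extractor over raw listing text (the amenity vocabulary is invented for the example; the real parsers are far richer):

```python
import re

# Invented amenity vocabulary for illustration.
AMENITY_PATTERNS = {
    "in_unit_laundry": re.compile(r"in[- ]unit (washer|laundry)", re.I),
    "fireplace":       re.compile(r"fire\s*place", re.I),
    "garage":          re.compile(r"\bgarage\b", re.I),
    "renovated":       re.compile(r"(renovated|updated) (kitchen|bath)", re.I),
}

def parse_amenities(description: str) -> dict:
    """Flag which amenities a free-text listing description mentions."""
    return {name: bool(pattern.search(description))
            for name, pattern in AMENITY_PATTERNS.items()}

print(parse_amenities("Updated kitchen, in-unit washer/dryer, attached garage."))
# {'in_unit_laundry': True, 'fireplace': False, 'garage': True, 'renovated': True}
```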

2. Dynamic Market Clustering

The detailed supply-side data we collected from the data pipeline is then geographically joined in our database with demand-side (demographic) data from the Environmental Systems Research Institute (ESRI) at the census tract level. We primarily utilize Tapestry Segmentation data from ESRI, which integrates consumer traits with residential characteristics to identify markets and classify US neighborhoods.

Once these data sets have been joined, Enodo’s clustering algorithm utilizes both supply- and demand-side data to intelligently define market areas. This process actually happens live on the platform. Starting from the census tract within which a subject property is situated, we compute a statistical similarity score for each adjacent census tract based on the property and market characteristics of those tracts, and select the most comparable adjacent tracts from which to form a cluster.

The algorithm then looks to the next layer of adjacent tracts and continues adding census tracts sequentially until sufficient data is accumulated to calculate rent and incremental amenity values. The data from census tracts are joined together to form markets based on the similarity of their supply and demand characteristics.
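
A simplified sketch of that greedy expansion, assuming tract adjacency, normalized per-tract feature vectors, and listing counts are already computed (every name and threshold here is a hypothetical stand-in, not Enodo’s actual algorithm):

```python
import numpy as np

def build_market_cluster(subject_tract, neighbors, features, listing_counts,
                         min_listings=1000, similarity_floor=0.8):
    """Grow a market cluster outward from the subject's census tract.

    neighbors      -- dict: tract id -> set of adjacent tract ids
    features       -- dict: tract id -> normalized supply/demand vector
    listing_counts -- dict: tract id -> number of usable listings
    """
    def similarity(a, b):
        # Cosine similarity between tract feature vectors.
        va, vb = features[a], features[b]
        return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))

    cluster = {subject_tract}
    data = listing_counts[subject_tract]
    frontier = set(neighbors[subject_tract])

    while data < min_listings and frontier:
        # Rank the current layer of adjacent tracts, most similar first,
        # keeping only those above the similarity floor.
        ranked = sorted((t for t in frontier
                         if similarity(subject_tract, t) >= similarity_floor),
                        key=lambda t: similarity(subject_tract, t),
                        reverse=True)
        if not ranked:
            break
        for tract in ranked:
            cluster.add(tract)
            data += listing_counts[tract]
            if data >= min_listings:
                break
        # Next layer: tracts adjacent to the cluster, not yet included.
        frontier = {n for t in cluster for n in neighbors[t]} - cluster

    return cluster
```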

3. Predictive Model Training

Within these market clusters, Enodo trains a machine learning model to predict market rent and amenity premiums based on the demographic characteristics of the market and economic characteristics of the multifamily housing supply within that newly defined market area. There are often tens of thousands of units to utilize for model training.

By intelligently sub-sampling properties in each cluster, Enodo’s algorithm can determine the incremental impact of year built, number of bedrooms, number of bathrooms, and each individual amenity by holding everything else about the property and market constant.
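
One way to picture the “hold everything else constant” step: train a model on the cluster’s units, then compare predictions for an otherwise identical unit with and without the amenity in question. A sketch with a generic gradient-boosted model (Enodo’s actual model and feature set are not public, so the names here are assumptions):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def amenity_premium(train: pd.DataFrame, amenity: str,
                    subject: pd.Series) -> float:
    """Estimate an amenity's incremental rent by toggling only that
    feature on an otherwise identical unit."""
    features = [c for c in train.columns if c != "rent"]
    model = GradientBoostingRegressor().fit(train[features], train["rent"])

    with_amenity, without_amenity = subject.copy(), subject.copy()
    with_amenity[amenity] = 1
    without_amenity[amenity] = 0

    pair = pd.DataFrame([with_amenity, without_amenity])[features]
    pred_with, pred_without = model.predict(pair)
    return pred_with - pred_without
```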

The best way to describe how this process works is to imagine that we divide the market value of a particular unit into the value of the neighborhood, the property, and the unit itself, then calculate the proportion of each.

Neighborhood variables include the demographic and location amenity data. There is a certain value associated with just having a unit in a particular area, regardless of the amenities or characteristics at the building and unit levels. Enodo calculates the value of simply being in a market and allocates that value across the market variables.

Property variables include things like the year built, property type, number of units, and building amenities. There is a certain value associated with just being in a particular property, whether you are in the worst unit or the best unit within that property. The portion of value not explained by the market is allocated to the building.

Unit variables include the size of the unit, number of beds/baths, and unit amenities. The final portion of the value comes from the relative competitiveness of each particular unit to other units in the same building.

When all three levels are taken into account, a complete picture of value is generated. This is the core of the Enodo platform.
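
Numerically, the decomposition amounts to expressing one unit’s market rent as a sum of shares. With made-up figures:

```python
# Illustrative (made-up) decomposition of one unit's $2,000 market rent.
value_components = {
    "neighborhood": 1100,  # demographics, location amenities
    "property":      600,  # year built, property type, building amenities
    "unit":          300,  # size, beds/baths, unit amenities
}

total = sum(value_components.values())
shares = {level: amount / total for level, amount in value_components.items()}
print(total, shares)  # 2000 {'neighborhood': 0.55, 'property': 0.3, 'unit': 0.15}
```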
