For the engineer in you, and for those seeking to learn more about the ins and outs of various counting technologies, the new version of the Traffic Monitoring Guide has now been released. For the first time ever, it includes a chapter on non-motorized traffic monitoring (Chapter 4) and a format for bicycle and pedestrian count data that allows those counts to be included in FHWA’s traffic monitoring dataset.
Active Living Research just released a useful brief on the value of, and approaches to, counting bikes in cities. We have seen some of this information before (see here, and a webinar here), but it is good to have this reliable new digest available in such a highly visible venue.
Who would have ever thought that a 7-foot structure that did nothing more than count the number of vehicles passing by could create such a buzz?
We know that select cities in Europe have these counting devices. But that is Europe. I have often wondered which US city would be the first to the start line. It looks like Seattle takes the cake.
The counter is made by Eco Counter, and the model is the Eco Totem. Here is some information from the manufacturer. The good news is that we tested the Eco Counter and found it to be pretty reliable.
Until someone can convince me that we have more consistently administered and robust measures of cycling and walking, at least for comparative purposes and for the entire US, we will continue to rely on the ACS.
Based on the summary from Wendell Cox, from ’10 to ’11 the bicycling and walking shares each increased by 0.03 percentage points. Bicycling is now at 0.56%; walking is at 2.82%.
With the bicycle arms race underway (which is a good thing, because peer pressure always helps communities do more), it’s really hard to know who is winning. If you read the blurbs, every city claims to be winning, because every city is seeing gains in its bicycle counts. But how consistent are the counting approaches? How robust are they? And even with consistent and robust approaches, how does one account for geographic or climate variations? Does a high bike count in Minneapolis on a sunny, 70-degree day ensure the same in mid-January? Probably not.
What is the best way to compare cities with high counts in the summer and low counts in the winter to cities with balmy weather all year round? One way, borrowed from the motorized traffic world, is to calculate an average daily count for the whole year (annual average daily traffic, or AADT). The National Bike and Pedestrian Documentation Project has done just that, offering factors to annualize hourly bike and pedestrian counts. While this was a notable step forward four years ago, it is far from definitive.
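As a rough sketch of how that factoring works: a short count is scaled up to a full day using typical hour-of-day shares, then deflated by month and day-of-week factors to land on an annual average. The factor values below are invented for illustration; they are not the Documentation Project’s actual numbers.

```python
# Illustrative sketch of count annualization. All factor values are
# made up for demonstration; real factors would come from published
# tables or, better, from a city's own continuous counters.

# Share of a typical day's traffic falling in each counted hour.
HOUR_SHARE = {16: 0.09, 17: 0.10}   # 4-5pm and 5-6pm (hypothetical)

# Ratio of an average day in the count month to the annual average day.
MONTH_FACTOR = {7: 1.35}            # July runs ~35% above average (hypothetical)

# Ratio of the count day's weekday to the average day of the week.
DOW_FACTOR = {"Tue": 1.05}          # hypothetical

def annualize(count, hours, month, dow):
    """Expand a short manual count to an estimated annual average daily count."""
    daily = count / sum(HOUR_SHARE[h] for h in hours)   # scale up to a full day
    return daily / (MONTH_FACTOR[month] * DOW_FACTOR[dow])  # remove seasonal/weekday bias

# 148 bicyclists counted from 4-6pm on a Tuesday in July:
aadb = annualize(148, hours=[16, 17], month=7, dow="Tue")
print(round(aadb))  # → 550
```

Every error in the assumed shares and factors propagates directly into the estimate, which is why a single national factor table is such a blunt instrument.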
First, the idea that we can create one set of factors for the entire country leads to major inaccuracies. Culture, climate, and terrain clearly vary from city to city, and all of them affect riding habits. Furthermore, a national set of factors may lull cities into thinking they don’t need their own continuous automated counts at all, since the work is being done at the national level.
Second, annualizing counts based on a one- or two-hour count inherently leads to more inaccuracies. There’s a reason traffic engineers abandoned the practice decades ago: even with relatively stable traffic, one- or two-hour counts lead to wildly varying estimates. Estimates of annual average daily bicyclists (AADB) based on one-hour counts can be off by as much as six times the actual AADB!
Here’s the good news! Cities around the country are installing their own automated bicycle and pedestrian counters that capture traffic 24 hours a day, 365 days a year. Permanent automated count sites give cities the data they need to create their own, city-specific annualization factors. And portable automated counters can count for a week at a time at various locations around the city, giving a much better estimate of volumes at each location than an army of well-meaning volunteers.
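To make the idea concrete, here is a minimal sketch of deriving city-specific monthly factors from a permanent counter and applying them to a week-long portable count. The counter data is simulated (a smooth seasonal curve), since no real feed is at hand; the week of portable counts is likewise hypothetical.

```python
import math
from statistics import mean

# Simulated year of daily totals from a hypothetical permanent counter:
# ridership peaks in summer and dips in winter.
daily = [200 + 150 * math.sin(2 * math.pi * (d - 80) / 365) for d in range(365)]

aadb_permanent = mean(daily)  # annual average daily bicyclists at this site

# City-specific monthly factors: average day in each month vs. the annual average.
days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
month_factor, start = [], 0
for n in days_in_month:
    month_factor.append(mean(daily[start:start + n]) / aadb_permanent)
    start += n

# A week-long portable count taken in June (index 5) at another location:
week_counts = [310, 295, 330, 340, 325, 280, 270]
aadb_estimate = mean(week_counts) / month_factor[5]  # deflate the summer peak
print(round(aadb_estimate))
```

A full implementation would also adjust for day of week and weather, and would use several permanent sites grouped by facility type, but even this simple version uses local data rather than a one-size-fits-all national table.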
The time has come for the bicycle community to realize what motorized traffic engineers have known for decades: too small a sample (e.g., a two-hour bike count) can be WORSE than nothing. Let’s put those well-meaning volunteers to work doing something more meaningful, like moving, protecting, and maintaining our automated bike counters. Only then can we robustly compare bike counts on the Midtown Greenway in Minneapolis to those on the Lance Armstrong Bikeway in Austin.
The Workshop was held at the Kaiser Permanente Educational Theater Program Facility in
After considerable discussion and vetting of different ideas, the following question was posed for voting after the morning session: for what needs should AT indicators be developed? The top three responses were:
- For AT data to be better standardized (as is done for motor vehicle data), enabling comparison and scaling (18 votes)
- To better assess the impacts of various AT projects through before-and-after evaluation (14 votes)
- To understand the needs of disadvantaged groups or other small areas, possibly focusing on key demographic populations (13 votes)