No, President Trump can’t text you: All about Presidential Alerts.

A test of the Emergency Broadcast System, circa 1990

On October 3, 2018, FEMA and the FCC will be testing their Integrated Public Alert and Warning System (IPAWS), which is an expansion of the public broadcasting alerts you once saw on TV. (They’re still there, actually, if anyone watches cable late at night.) The IPAWS consists of traditional alert channels like radio and TV and new alert channels like cell data.

The announcement of the test has resurrected a 2016 rumor that claims President Trump will be able to mass text Americans directly. One of the kinds of alerts that exists under IPAWS is an un-opt-outable “Presidential Alert” which some news reports fear gives unilateral control to the President to push messaging out to our cell phones. This rumor is absolutely false for a number of reasons.

  1. The Presidential Alerts are not new and have, in fact, been tested before: twice under President Obama (November 2011 & September 2016) and once already under President Trump (September 2017). What’s new about this test is that FEMA is testing delivery of the alert over broadcast (TV) and data (cell phones) at the same time to make sure that doing both at once doesn’t break the system.
  2. It is extremely difficult to send out a national alert. There are both legal and technical requirements that must be met before an alert is able to be delivered. It is vanishingly unlikely that Trump would be able to or choose to send whatever messages he’d like as alerts. Additionally, there IS oversight of the process from thousands of stakeholders with a vested interest in keeping alerts trustworthy and professional. (See more below).
  3. It is true that you cannot opt out of the Presidential Alerts. By law, national alerts are incapable of being turned off. You may discover a way to switch off alerts in your phone settings, but in fact you will be turning off local alerts which–let’s be honest–are probably more useful to you than national alerts. If you’ve ever received an Amber Alert or a severe weather alert by text, that is an example of your local alert system. Those kinds of alerts originate from your county emergency management office or police department, etc. You can turn them off, but it’s not recommended. You can’t turn off national alerts because support for them is built into your phone itself. But most importantly, the IPAWS is for emergency alerts only and is controlled by FEMA with support from the FCC. Trump won’t be texting you on a whim.
  4. Finally, I want to correct another mistaken rumor I’ve been hearing. The Wireless Emergency Alerts (the ones that go to your cell) do not collect any data. The system is designed to push data only. It does not collect location data or personal information. While it may feel like the alerts know where you are, in fact, the message is simply broadcast from cell towers in the affected area. If you’re in range of a tower, you get the alert.
Most of us live in a hazard zone. Sign up for alerts at your state’s Emergency Management website.

Let’s unpack these further:

First, it’s important to note that so-called “Presidential Alerts” are not new. They have existed in every iteration of the national warning system since its inception in the 1950s. Even more importantly, they have never been used. Never, not once, under any president. I note this so strongly because many of the reports seem to imply that the controversial Presidential Alerts are something Trump created in order to more forcefully influence the public. That is false.

Here’s some history to help us see how:

In 1951, President Truman established what would become the Emergency Broadcast System (known at the time as CONELRAD), which used AM radio to warn citizens of inbound nuclear attacks. In the ’60s and ’70s it was expanded to include FM radio and TV and developed further at the local level. In the 1990s, it changed its name to its current one, the Emergency Alert System (EAS).

In 2006, in response to the difficulties surrounding Hurricane Katrina, President Bush signed Executive Order 13407 ordering DHS (Dept of Homeland Security) to modernize and unify the nation’s existing public alert system which was made up of these 4 separate systems:

A) The Emergency Alert System (EAS). This system sends text- and audio-based messages over TV and radio stations, cable, and satellite services. These messages are NOT relayed over the NOAA/National Weather Service radios which many Americans use in their preparedness kits.

B) NOAA Weather Radio is a 24-hour network of VHF FM weather radio stations that transmit directly from local National Weather Service offices. It’s especially popular in the Midwest.

C) The National Warning System (NAWAS) which is an automated telephone landline-based warning system. While still in use, it is slowly being made obsolete as fewer and fewer households have a landline.

presidential_alert
An example of what the Wireless Emergency Alerts look like via FEMA

D) And finally, the Wireless Emergency Alerts, designed to issue warnings over mobile devices like cells, tablets, and pagers. This program was funded by the Warning, Alert and Response Network (WARN) Act of 2006. It allows for automated messages (specially wrapped in software code) to be sent to cell carriers who in turn push the message via their towers to all cell phones in range. The three types of messages legally allowed are: Presidential Alerts; alerts involving imminent threats to life safety classified as “extreme threats” or “severe threats” (most often weather and terrorism-related alerts); and AMBER alerts (child abduction emergencies). Presidential Alerts have never been issued. The only users have been other agencies with access to the network, like the National Weather Service and the FBI.

These 4 systems were unified under the IPAWS (Integrated Public Alert and Warning System) platform which helps systems with different tech requirements share information between them.

A chart of how IPAWS works.

Now for the legal requirements I promised:

Legally, IPAWS must only be used for emergencies (and testing, obviously). The Integrated Public Alert and Warning System Modernization Act of 2015 specifically says:

Use of System.—Except to the extent necessary for testing the public alert and warning system, the public alert and warning system shall not be used to transmit a message that does not relate to a natural disaster, act of terrorism, or other man-made disaster or threat to public safety.

Additionally, the FCC has various other established laws and protocols which reinforce the rule: emergency use only.

Finally, as I mentioned, there is also a body of stakeholders who monitor IPAWS use and review policy and technical updates. Here is the list, which includes State, Local, Tribal, and Territorial government group members; the International Association of Emergency Managers (an independent certifying and research body); disability rights groups; and independent legal, policy, and technical experts.

And finally, the technical barriers behind it all:

It is far more complicated to send out an IPAWS alert than to send a tweet, as you can see from this protocol guide. A protocol guide is the manual for IPAWS technicians which explains how to properly send out a message and how to troubleshoot the system. This one is mostly software code. In order to send out a message, you have to know how to code (among other things) the header and ender that bookend the message. Also, the message length is limited to about a sentence or several phrases. Therefore, I think it’s far more likely that Trump would choose to tweet–even if he could overcome the legal and technical barriers. Snopes agrees:

Although it is true that an FCC emergency alert system function enables any sitting president to send emergency texts to all Americans (and that only messages from the president cannot be blocked), any other information is pure speculation: nothing substantiates the idea that President Trump intends to misuse the system, or that the FCC would allow him to do so.
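To make the “specially wrapped in software code” point concrete: IPAWS exchanges its messages in the Common Alerting Protocol (CAP), an XML format. Here is a minimal sketch in Python of what building a CAP-style test alert might look like. The element names follow the public CAP 1.2 schema, but the identifier, sender, and text are invented for illustration, and real IPAWS messages also carry digital signatures and carrier-specific handling not shown here.

```python
import xml.etree.ElementTree as ET

# Namespace from the public CAP 1.2 specification.
CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_test_alert() -> str:
    """Build a minimal CAP-style alert envelope (illustrative only --
    the identifier, sender, and wording below are made up)."""
    alert = ET.Element(ET.QName(CAP_NS, "alert"))
    for tag, text in [
        ("identifier", "EXAMPLE-2018-10-03-0001"),  # hypothetical ID
        ("sender", "ipaws@example.gov"),            # hypothetical sender
        ("sent", "2018-10-03T14:18:00-00:00"),
        ("status", "Test"),        # a test, not an actual emergency
        ("msgType", "Alert"),
        ("scope", "Public"),
    ]:
        ET.SubElement(alert, ET.QName(CAP_NS, tag)).text = text

    # The <info> block carries the human-facing message.
    info = ET.SubElement(alert, ET.QName(CAP_NS, "info"))
    for tag, text in [
        ("category", "Other"),
        ("event", "National Periodic Test"),
        ("urgency", "Unknown"),
        ("severity", "Unknown"),
        ("certainty", "Unknown"),
        ("headline", "THIS IS A TEST of the National Wireless Emergency Alert System."),
    ]:
        ET.SubElement(info, ET.QName(CAP_NS, tag)).text = text

    return ET.tostring(alert, encoding="unicode")

print(build_test_alert())
```

Even this toy version shows why “just texting” isn’t how it works: the message rides inside a strict, machine-validated envelope, and getting any of it wrong means the alert never leaves the system.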

Further Reading

Featured Image: Woman standing next to air raid siren, WWII. Unknown origin, though possibly Toronto.


Follow Up 2 Years After Cascadia Rising

I can’t believe it’s been two years since the enormous multi-state full-scale exercise, Cascadia Rising 2016. Since the initial exercise, Washington State Emergency Management Division (and all its partners) have been busy assessing the lessons learned and planning discussions and smaller, issue-specific exercises like Fractured Grid (below). Smaller agencies, like WaTech (Washington State’s consolidated information technology agency, which hosts all the other agencies’ data, phones, applications, email, and internet), are working hard on Continuity of Operations Plans (COOP), which describe how an agency will continue to function if people in the chain of command, workers, or other operations are missing or inaccessible. The Feds and the State have been discussing new response priorities based on improved predictions after a couple of earthquake and tsunami research and policy reports were (and are continuing to be) published. Planning and preparedness are never done: scientists continue to study geologic processes in the area, responders in places with similar problems (New Zealand, Japan, Puerto Rico) report their own lessons learned and best practices, and social and political advocates move the needle on the public’s opinions about agendas and funding priorities.

I’m attending today’s Fractured Grid which is a tabletop exercise designed to bring together utilities and critical infrastructure stakeholders for a mental exercise based on the Cascadia Rising scenario. Private utilities companies, FEMA, and representatives from many state agencies like Department of Commerce, Department of Transportation, and Department of Public Health, have gathered in a (chilly) room at the Legislature building to talk to each other about how to coordinate information sharing and set priorities for restoration should a 9.0 earthquake strike our coast.

This more informal group thought-exercise is an important component of Washington State Emergency Management Division’s multi-year exercise plan–an improvement plan hoping to link lessons learned from Cascadia Rising 2016 to another test of our response capabilities coming in Cascadia Rising 2022 (hopefully). These long-term, multi-year improvement plans are standard practice in the emergency management field, and it’s likely your state, county, or city is doing the same in some unassuming conference room.

Sometimes, it can feel like nothing happens in government, like it’s a black hole filled with talking heads and useless meetings, but sometimes quiet work is being done. Little by little, Washington State is working to prepare to serve the people in their time of need. There ARE dedicated, capable, and knowledgeable public servants. They’re not working in secrecy, only outside of the spotlight in a chilly conference room somewhere.

Hep A, Homelessness: An emergent crisis

In 1854, London saw yet another deadly epidemic of cholera. London was burgeoning under the influence of the Industrial Revolution, but its medieval (literally) sewers, trash heaps, gutters, and under-your-house cesspools were overflowing with slaughterhouse offal, grease-rendering run-off, and market animal excrement. The city began dumping its unwanted filth into the Thames, which became smelly and dangerous.

A political cartoon from 1852. See more here.

Many physicians, scientists, and politicians at the time believed that disease was transmitted by bad air–in particular, air putrefied by rotting matter (called the miasma theory). This wasn’t a totally stupid idea. Scientists of the time could observe fungus spores through their microscopes and could observe the close correlation between coughing and death. They could see the smoggy air out their windows and the filthy streets at their feet. Unfortunately, the cholera outbreaks persisted despite the treatments of the time.

Dr. Snow’s epidemic dot map. Red dots and blue faucets are a modern addition to make it easier to see the patterns. See the original black and white here.

Dr. John Snow (not that one) had another idea. He theorized that cholera was spread through water tainted with germs (the germ theory). When an extremely bad outbreak of cholera occurred in Soho (127 people died in three days), Dr. Snow, with the help of Reverend Whitehead, carefully interviewed the patients and the community members. He was able to identify what each of the patients had in common: a water pump. He took samples of the water, but the results were inconclusive. He asked the city to take the handle off the pump. They did so, and the infection seemed to decline. (In his journals, he is careful to note that the disease was already declining because people had fled when the outbreak occurred, but that taking off the handle did seem to help reduce the infection rate.) He carefully drew a dot map of all those who had died from cholera in the neighborhood and noticed an anomaly which further supported his water-borne theory: none of the employees of the Broad Street brewery got sick. As part of their wages, they were given beer to drink. And the water for the beer was boiled–killing the germ.

And that’s how modern epidemiology was born.
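Snow’s dot-map technique boils down to a simple computation: plot each death, attribute it to the nearest water source, and see which source dominates the tally. A toy sketch in Python, with invented coordinates standing in for the hand-drawn map (the pumps, positions, and counts below are all hypothetical):

```python
from collections import Counter
from math import hypot

# Hypothetical coordinates (arbitrary map units) standing in for
# Snow's dot map: a few pumps and the homes of the deceased.
pumps = {
    "Broad St": (0.0, 0.0),
    "Rupert St": (5.0, 5.0),
    "Warwick St": (-6.0, 2.0),
}
deaths = [(0.5, 0.2), (-0.3, 0.8), (1.1, -0.4), (4.8, 5.3), (0.2, 0.9)]

def nearest_pump(home):
    """Attribute a death to whichever pump is closest to the home."""
    return min(pumps, key=lambda name: hypot(home[0] - pumps[name][0],
                                             home[1] - pumps[name][1]))

tally = Counter(nearest_pump(home) for home in deaths)
print(tally.most_common())  # in this toy data, the Broad St pump dominates
```

Snow did this by hand, on paper, in 1854; the logic is the same one modern outbreak investigators run over case coordinates today.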

Today, much of this epidemiology work is done by the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO). Like Dr. Snow, they monitor disease outbreaks, make recommendations, and respond to emergencies with vaccines or other appropriate care.

In March 2017, the CDC recorded an enormous spike in another disease, like cholera, spread by the fecal-oral transmission route: hepatitis A. Hepatitis A is a viral infection which targets the liver. It is spread by fecal contamination and is completely preventable by good hygiene (washing hands, drinking clean water) and vaccinations. It is most often found in places of crises with developing or destroyed infrastructure. So why was it showing up in San Diego, Salt Lake City, Detroit, and New York City?  The Huffington Post reports the CDC’s numbers in this terrific graphic.

HuffPo crunched the numbers: hep A on the rise in 2017.

In San Francisco, escalators were being shut down for cleaning; excrement had piled up in the gears. In San Diego, feces on the sidewalks and streets were drying out and aerosolizing (becoming airborne). It became so bad that San Diego washed its streets with bleach and is scheduled to continue to do so every two weeks. Detroit saw a national record (since the release of a vaccine) of 837 cases of hepatitis A–bad enough for the state health department to issue a travel warning. I could go on.

NPR reports street cleaners using bleach in San Diego to combat Hep A outbreak.

If Dr. Snow’s observations have taught us anything, it’s to look at outbreaks as a warning sign of a city in crisis. Sick cities make people sick.

In this case, as the Washington Post (and others) posit, the crisis is homelessness. Hepatitis A outbreaks correlate to cities with the highest homeless and drug-using populations (people who shared needles were the largest population of hep A patients in the ’90s, but that has now changed). Homelessness is caused by many factors, but many in the Bay Area argue that the rise in housing costs coupled with a decline in public facilities caused this particular issue. Others argue that an increase in spending on public services has attracted homeless persons to these areas. But whatever the case, it remains that the streets are unhealthy.

Additionally, the outbreaks have been hard to stem because the vaccine for hep A is a two-dose series, with the doses administered months apart. It can be difficult for health workers to follow up with homeless patients to get their second shot. This allowed the hep A outbreak–which would normally be contained by herd immunity–to get large enough to enter the broader population (only briefly, and to no great harm, thanks to quick-acting responders).

On the other hand, the hep A outbreaks are (at least in San Diego) causing vulnerable homeless populations to become even more vulnerable as the city is taking a more proactive stance on arresting homeless people. The justification is two-fold: a) they need to move so that the city can wash the sidewalks and b) they are disproportionately falling ill due to unsanitary conditions. It might be better to remove them to a clean area for prevention and treatment.  In the meantime, the homeless want to know what will be done with them and their possessions now that they have been moved.

Fortunately, there is some good news. As of April 2018, the CDC reports the following:

With the slowdown in reported hepatitis A cases across California, CDPH has demobilized the outbreak response and continues to monitor reported hepatitis A cases statewide.

There is similar news from all the infected cities. Thanks to a history of epidemiology and the quick action of health-care professionals, we have quelled hepatitis A in our cities. For now.

Dr. John Snow’s research into cholera helped to inform London city planners’ decisions over time. The pump at fault was drawing water three feet from an underground cesspit which had begun to leak. Because of this and other incidents, we know how to keep our water and sewage separated.

What will these modern hep A outbreaks teach us about city planning?


Further Reading

 

How to Census the Homeless

Last time, we talked about how Emergency Managers can contact vulnerable populations–for instance, the homeless–with evacuation notices. We discovered that social connections which make communities more resilient as a whole make the homeless in those communities more resilient too.

But as an Emergency Manager, I know that resilience often depends on the effects of long-term, complex social issues. For instance, Haiti sometimes finds it more difficult to recover from earthquakes than its sister country, the Dominican Republic, because Haiti is poorer, has non-resilient infrastructure, and has social policies which (debatably) prevent economic recovery. Similarly, in America, policy discussions centering around social-political issues like homelessness impact the long-term resilience of a community to disasters.

Policy discussions are important, and they rely heavily on statistics. As you know, fair representation in a community means being accurately counted. How can policy makers and advocates make decisions about policy and funding without a careful understanding of the demographics and size of a vulnerable population? As you might imagine, counting a scattered and mobile population is extremely difficult. This article from the Seattle Times demonstrates how parsing the census data can greatly change the picture; depending on how you ask the question, Seattle has between the third- and sixth-largest homeless population in America.

The severity of Seattle’s homelessness crisis is different depending how you measure it — but no matter which way you look at it, we’re in the top 10 worst in the country.
Courtesy of the Seattle Times

 

So how do you count the homeless population in your town? In America, most cities choose to do an annual, one-night census of the homeless to get a sample.

Counting the unhoused once a year to get a sample.

Most cities have an annual, one-night census where volunteers physically visit known homelessness sites and gather demographic data from those sleeping outside. Sometimes this is accompanied by census data gathered from people using homelessness services. This method of data collection can be problematic for several reasons.

a) A yearly census may have data gaps. While it’s true a yearly census is good for trends over time, it gives only one snapshot of the homelessness situation on a single night. Some argue that more frequent censuses would give granularity to the data.

b) Each jurisdiction and agency draws the geography differently. As the Seattle Times points out, because of the hodgepodge way each locale accesses funding, each city and county may count people differently. “New York City, for example, counts the homeless people inside its city limits while L.A. counts everyone in L.A. County except the cities of Pasadena, Glendale and Long Beach…. Denver and Boulder group together and count all the homeless people in the six counties around them….” This can make comparing data nationally–or even regionally–a headache.

Pic courtesy of Associated Press

c) It doesn’t count those in shelters (sometimes, depending where you are). This isn’t necessarily a problem, depending on what you want the census to tell you. Some cities, like New York, are under legal mandate to house their homeless, so counting those strictly on the streets matters to them. But maybe your city wants to know more about how many people might need a certain governmental service. In that case, this statistic might not help you, so beware.

d) It doesn’t count the “concealed homeless”–those living on a friend’s couch or in their cars. Like point c, this is a matter of semantics; neither good nor bad, but worth noting.
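Problem (b) is partly a data-normalization problem: before comparing jurisdictions that draw their geography differently, you can at least divide each count by the population the count actually covers. A toy sketch in Python, where every name and number is hypothetical:

```python
# Hypothetical counts and covered populations -- illustrative only.
# "covered_pop" is the population of whatever geography the count
# actually spans (city limits, a whole county, a multi-county group).
counts = {
    "City A (city limits only)": {"homeless": 4_000, "covered_pop": 700_000},
    "Region B (whole county)":   {"homeless": 9_000, "covered_pop": 3_000_000},
    "Group C (six counties)":    {"homeless": 6_500, "covered_pop": 2_000_000},
}

def rate_per_10k(entry):
    """Homeless persons per 10,000 residents of the covered geography."""
    return 10_000 * entry["homeless"] / entry["covered_pop"]

# Ranked by rate rather than raw count -- note how the ordering can
# differ from a ranking by raw totals.
for name, entry in sorted(counts.items(), key=lambda kv: -rate_per_10k(kv[1])):
    print(f"{name}: {rate_per_10k(entry):.1f} per 10k")
```

Per-capita normalization doesn’t solve the geography problem–a county count and a city count still describe different places–but it makes the comparison less apples-to-oranges than raw totals.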

Pic courtesy of San Bernardino Point in Time Count

Is crowdsourcing better? No matter what you do, it’s hard to get a clear estimate of “hidden” individuals. Cities are big and individuals are small. Crowdsourcing has helped gather data on large, complex problems before; maybe–the thinking goes–it can do the same here.

New York City has tried just that with their 311 app. Designed to accumulate citizen complaints for a variety of municipal problems, it was expanded to allow users to help identify the homeless. App-users can take geo-tagged pictures of the homeless and tag them with statements like “NeedsMedicalAid” or “AggressiveBegging”. Unfortunately, some homelessness advocates feel this app has led to shaming and harassment (especially–in my opinion–given that the complaints can be viewed by the public). (View the story here.)

On the other hand, a competing app in New York named “WeShelter” has a gentler image.  App-users “unlock” donations from sponsors which are given to several of New York’s homelessness organizations. Users have an opportunity to share their location with the app by clicking “I’m near a person who is homeless” and can also provide other information like whether or not the person needs non-emergency assistance. This is an interesting combination of location data gathering with activism. And there are many, many more apps with various methodologies out there.

GiveSafe Beacon. Pic courtesy of Smart Cities Council

In Seattle, an app called GiveSafe distributes beacons the size of a quarter to individuals in need. The beacon has a Bluetooth transmitter that connects to an app on your phone. You can see the individual’s story as you pass them and also donate, if you wish to. The beacon holder can then spend the money at select merchants or non-profits. To keep the beacon active, the holder must check in with a counselor once a month. It was designed in the hope that donors would be more willing to donate if they could be assured that their money would be used to help with food (for example) and not vices. (See also StreetChange in Philadelphia & HandUp mostly everywhere.)

 

Conclusion

Homelessness census data–like most statistics–are complex and variable. However, policy makers and municipalities rely on this data to distribute funding and reassess their policies. It’s up to us to make our statistics as robust and meaningful as possible in order to support that work. Often, that means gathering different kinds and sources of data to create a mosaic-like picture of the situation, and experimenting with data gathering methodologies.

Further Reading

How to Evacuate the Homeless

How do you find and evacuate the homeless if you need to?

It’s a question that’s been bothering me–and many city leaders–for a while now. Homeless people are often the most vulnerable and the most disconnected from “normal” information channels like TV and radio which makes them a population more likely to be hardest hit by a disaster.

I did some research and talked to some people and here’s what I found:

  1. The homeless are not as disconnected as I originally thought.
  2. Solutions designed to target other, related homelessness problems can be adapted for emergency use (a pretty standard procedure for cities and states faced with limited resources).
  3. The most vulnerable of society (homeless and otherwise) will–no matter what–be the hardest hit during a disaster. But the more prepared individual citizens and businesses are to take care of themselves, the more resilient the community is, and the more help is available to the most vulnerable of society when it’s needed most.

Connections Exist

According to the Atlantic, 75% of homeless youths use social media compared to 90% of their age-matched compatriots. While it is as yet one tiny study, it led one researcher to posit that the Digital Divide may not be as large as we thought–especially since the FCC (Federal Communications Commission) has been working to narrow that divide since 1985 via its “Lifeline” program, which subsidizes landlines and cell phones for low-income consumers. (By the way, Forbes and Snopes both debunked the myth that these phones are free or paid for by taxes.)

Picture courtesy of The Atlantic

Not only do some homeless have more access to cell phones and the internet (giving them a channel by which to receive evacuation notices) than I thought, but they are more socially connected than I imagined. This interview with Vacaville, CA Police Chief John Carli from the Armstrong and Getty Show shows how familiar the police and other service providers get with homeless individuals. (I highly recommend a listen. It’s about 20 min long, but really interesting.) And some cities are working to make those social connections even stronger.

Connections build resiliency

Carli describes how his town created a “Homelessness Roundtable” to coordinate with private and public stakeholders/service providers. He also formed the “Community Response Unit”–a police unit designed to–among other things–get to know homeless individuals. CBS Sacramento has an interesting report on their successes.

 

Photo courtesy of Seattle Navigation Teams

Likewise, Seattle has formed “Navigation Teams,” a combination of police personnel and social workers who spend all day, every day on the city streets, getting to know the individuals in the camps and offering them housing or other services. They report that after the institution of these teams, the acceptance rate of housing offers went from 5% to 30%.

Furthermore, this news report alludes to one of the other benefits of these teams which is relevant to my question. When an infant disappeared into the vast network of homeless camps, the Navigation Team was asked to help find her. Because of the knowledge and trust they had earned with their daily engagement, they were able to leverage the homeless network to find the child. This is the true power of these Community Response Units and Navigation Teams: they can be tapped to deliver disaster warnings to those who might otherwise miss them.

Homeless camp on I-5 near the Capitol Hill neighborhood of Seattle. Some encampments are especially close to the dangers of traffic.

In fact, it has already happened on a small scale. I spoke to the Seattle Office of Emergency Management spokesperson who mentioned to me that shortly after the Navigation Teams had begun working, a tanker overturned on I-5. The police used the brand new Navigation Team maps of homeless encampments to evacuate the homeless in the area. (He didn’t tell me a specific date, but I think this is the news report.) Navigation Teams and Community Response Units are designed to help the problems surrounding homelessness, but they may be a crucial link when it comes to delivering disaster warnings. I’d love to see Navigation Teams in every city.

Finally,

I can’t help but notice a lesson buried here: when we work to make our communities safer and healthier, we make them more resilient as well. The homeless may be especially vulnerable, but–exactly like the rest of us–when they have more connections, they are more resilient.

Further Reading

Fewer Lightning-strike Deaths Good News for All

Found this on Twitter today from Bill Gates’ blog, which got it from Steven Pinker’s new book, Enlightenment Now:

You’re 37 times less likely to be killed by a bolt of lightning than you were at the turn of the century—and that’s not because there are fewer thunderstorms today. It’s because we have better weather prediction capabilities, improved safety education, and more people living in cities.

I love this statistic because it’s quirky, but it also so elegantly illustrates what my job is about. Better meteorology, geology, volcanology, sociology, and psychology–it all makes this a safer world for everyone.

If you’re worried that the world is getting more and more violent all the time, take a look at Pinker’s books. They will illustrate that in many ways this Earth is getting less violent and more healthy. Good news like that is always welcome, am I right?

What statistic do you have that brings good news?

In the Land of Landslides

Officials hope to avoid another Nile-type landslide 45 minutes south at Rattlesnake Ridge.

Nile, Wa., 2009

In 2005, the State Department of Natural Resources (DNR) inspected a quarry in the southeast foothills of the Cascade Mountains in Washington State and told its operators that they were digging at the toe of a landslide. What’s the toe of a landslide? To borrow a metaphor from HistoryLink, which tells this story: imagine you dump gravel down a flight of stairs. You start digging where the pile has stopped–the toe–a step or two before the ground. If you dig enough, more of the pile will slide down from above.

Essentially, geologic ages ago, the south side of Cleman Mountain slid into the Naches River Valley in a cataclysmic landslide six miles long, creating the Sanford Pasture Formation. Much later, still thousands of years ago, the Naches River undercut an edge of this formation and another massive landslide covered the river in hundreds of feet of debris. Over many years, the debris gradually eroded away, creating a hillside which present-day homeowners have developed.


Then, in 2005, the DNR asked the quarry operating there to submit a plan for monitoring the slope, as it sat on landslide terrain. In 2007, the quarry did its own investigation of the slope, which concluded that the quarry operations were too small to be exacerbating slope instability.

At 6 am, October 11, 2009, the slope gave way for a third time to the largest landslide in recorded Washington State history. As History Link describes:

It occurs near the small Yakima County community of Nile located in the eastern foothills of the Cascades about 10 miles northwest of Naches, near Cleman Mountain. State Route 410 travels west from Yakima through the Nile Valley and across the Cascades at Chinook Pass. The landslide lifts this road and breaks it into huge slabs of asphalt scattered every which way. It lifts the river, leaving rainbow trout dying among high rocks that used to be the riverbed.  Five homes are damaged or destroyed in the landslide and another 20 are damaged by flooding as the river finds its new way around rock and debris some 40 feet thick. The slide covers 80 acres, taking several power poles; as a precaution Pacific Power cuts service to about 800 customers. Residents of the sparsely populated area are evacuated. Factors causing the landslide are speculated to be the action of the Naches River undercutting the steep slope, the slippery geological situation of a layer of basalt sliding over a deeper layer of sand, and the activities of a gravel quarry engaged in undercutting the slope.

Thankfully, no one was injured, but the landslide permanently altered the course of the Naches River, disrupting fisheries, flooding 20 homes, changing bridges and roads, and nearly destroying Yakima’s water treatment plant. (Read more about the massive project to build a new road and a new river channel here. Cool diagrams and pictures.)

 

Rattlesnake Ridge, Wa., 2018

01042018_yakima_0914042-780x990

Almost exactly 8 years later, in October 2017, and only 45 minutes south, a crack was discovered on the top corner of a hill called Rattlesnake Ridge above a quarry near I-82. The quarry moved operations away from the slope and hired a geologist to monitor it.

Presently, the fissure is about 250 feet deep, though geologists believe that the basalt bedrock has not cracked (which is good news; only surface dirt will slough off if a landslide occurs). The main fissure is growing at about 2.5 inches a day, or about 1.5 feet per week, and gaining momentum. The Red Cross is calling it a slow landslide.
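For what it’s worth, those two rates are the same figure in different units; a minimal sanity-check of the conversion (the 2.5 inches/day number is from the reporting, the rest is just arithmetic):

```python
# Convert the reported fissure growth rate from inches/day to feet/week.
INCHES_PER_FOOT = 12
DAYS_PER_WEEK = 7

rate_in_per_day = 2.5  # reported growth rate of the main fissure
rate_ft_per_week = rate_in_per_day * DAYS_PER_WEEK / INCHES_PER_FOOT

print(round(rate_ft_per_week, 2))  # 1.46 -- i.e. "about 1.5 feet per week"
```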

This drone footage, courtesy of geologist Steven Mack (for the Yakima Herald), is from about a week ago. It gives very good establishing and close-up shots of the slope.

gfx-2_1515538942989_10438003_ver1-0_640_360
courtesy of Kiro 7 with an excellent report here

Geologists believe that the landslide will continue to fall slowly south into the quarry and stabilize, though it is possible that a million cubic feet of dirt will fall southwest onto Thorpe Road and parallel I-82--farther if the bedrock is indeed broken. The 70 or so residents nearby have been evacuated, and the quarry owners are offering to pay for hotel accommodations. No one wants another Nile-type slide, which trapped Nile residents for ten days while an emergency access road was built.

Meanwhile, the most likely threat is anticipated to be rock falling on the road. To that end, the Department of Transportation (DOT) has placed a wall of shipping containers filled with cement barriers along the shoulder of I-82. It won’t save the road from a landslide, but it will help with falling rocks.

01042018_yakima_091404-780x520
courtesy of the Yakima Herald

So this won’t be another Oso slide?

How did we get this far without discussing the Oso, Washington landslide of 2014, which killed 43 people? Because it’s a very different scenario from both the Nile landslide and the (potential) Rattlesnake Ridge landslide. From a local news investigative report by KIRO 7:

After speaking with experts, Washington state leaders are confident that the Rattlesnake Ridge landslide is very different from the deadly Oso landslide that took 43 lives nearly three years ago.

Geologists explained to KIRO 7 that Oso was mud while Rattlesnake Ridge is consolidated rock on the move.

Also, the Oso slide was affected by rainfall. Water does not appear to be a factor in the Rattlesnake Ridge landslide.

“And (with) this one we have more time to prepare,” Public Lands Commissioner Hilary Franz told reporters. “And understand what’s going on and respond to it.”

Interestingly (as the report continues), the Rattlesnake Ridge fissure is visible in aerial photos as far back as the 1970s. Though it’s too soon to tell, geologists speculate that the cause of the cracks may be similar to the Nile landslide’s situation: a “reactivation” of a much older landslide, caused by gravel sitting on basalt sitting on sand, being pushed by the Cascade faults.

We’re keeping an eye on this slow-moving landslide for you. Stay tuned for developments.

Further Reading

Ambulance Tech

A year ago I wrote about some of the issues 911 operators face in getting help to you in a timely manner. I wanted to follow up and see if things had improved at all. Here’s what I found.

  1. Number-crunching statistical software is helping about half of America’s cities predict where the next emergency call is likely to come from, so they can pre-position ambulances in those high-call areas. In Jersey City, over the last decade, it has cut response times in half.
  2. Various kinds of traffic signal preemption systems are available for emergency vehicles and call centers. Some are based on sight, sound, proximity, or a master traffic control board at base. Essentially, they turn red lights green for the ambulance or clear the intersection by turning all lanes red.
  3. There are experimental programs in various cities designed to ease the burden of urgent but non-life-threatening calls. Some cities are trying a triage nurse service and vans for non-emergency medical transport.
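The preemption idea is simpler than it sounds. As a toy sketch (not any vendor’s actual system; the approach names and functions here are invented for illustration), the core logic is: give the ambulance’s approach a green and hold every other approach at red--or empty the intersection entirely:

```python
# Toy sketch of traffic signal preemption. The approach the ambulance is
# coming from gets a green; all other approaches are held at red.
APPROACHES = ["north", "south", "east", "west"]

def preempt(approach: str) -> dict:
    """Return the signal state for each approach during preemption."""
    if approach not in APPROACHES:
        raise ValueError(f"unknown approach: {approach}")
    return {a: ("green" if a == approach else "red") for a in APPROACHES}

def clear_intersection() -> dict:
    """Alternative strategy: turn all lanes red to empty the intersection."""
    return {a: "red" for a in APPROACHES}
```

Real systems layer detection (optical strobes, sirens, GPS proximity) and timing safeguards on top of this, but the end state at the light is essentially what the sketch shows.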

But there are still problems with the system:

  1. According to this report, there are more medical emergencies than fire emergencies, but cities invest in fire trucks (for lots of different reasons), which often means that the only vehicle available to respond to your heart attack is a ladder truck that may not be physically able to deliver you to the emergency room.
  2. According to the CDC, half of Americans have ditched the landline for cell phones only, which can make finding you hard for these reasons:
    1. 911 services are working with outdated equipment and pinched funding.
    2. Satellites working with cell towers can take almost 3 minutes to find you.
    3. Buildings can make it even harder for GPS to find you.
    4. Apple and Google services like maps use wifi data to more accurately pinpoint your position and barometric readings to approximate your altitude. But even then, it can be hard to find the right apartment door.
    5. The cell carriers don’t want to pay to use Apple and Google’s mapping services. Although new regulations are compelling cell carriers to provide 911 services with better location information, the carriers plan to require wifi providers and users to manually enter wifi data to build up their own proprietary database, according to the Wall Street Journal article. This will likely be incredibly inefficient to develop and hard to maintain.

So what might the future bring?

ambulance-2-master1050

I’m hoping it’ll bring self-driving ambulances. I understand about half of you, according to this survey, disagree with me, but I think self-driving ambulances could be really cool. According to auto-piloting and sociology expert Stephen Rice, self-driving ambulances might do a lot of good. Imagine:

  • Ambulances able to drive to staging areas while the crew takes a break.
  • A super-efficient vehicle able to plot a super-speedy route to the hospital.
  • One EMT performing CPR while another is able to call a doctor for an urgent prescription instead of having to drive.

This is still only a dream. For instance, self-driving cars are still having trouble driving aggressively enough. In some areas, they get bullied by human drivers who know the automated car will back off to avoid a collision. In 2009, Google’s car got trapped at an intersection waiting for the other cars to come to a complete stop and let it go. As Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles, says:

They have to learn to be aggressive in the right amount, and the right amount depends on the culture.

But that was nearly eight years ago now. Already, Google’s test cars have driven nearly 1.1 million miles and are getting better and better at solving problems.

 

May you never have to ride in an ambulance; but if you do, may it be as quick as humanly--or robotically--possible.

Further Reading:

Forthright Irma: A discussion of Hurricane Ambiguity.

It’s hard to get people to evacuate. Cara Cuite and Rebecca Morss–risk communication and hurricane experts–write about several factors that can cause people to ignore evacuation warnings. Things like: some people don’t like being told what to do; sometimes they judge fear-based messaging as “overblown” and disregard it; sometimes the cost and logistical nightmare of evacuating causes them to prefer to shelter-in-place. (Do read their article; it’s so interesting.)

But my favorite factor which causes people to ignore evacuation warnings is ambiguity. Ambiguity is systemic and unavoidable and–worse–humans are terrible at managing it. Some individuals and cultures are better at tolerating uncertainty than others (as Hofstede points out), but generally humans don’t like to take action when they can’t predict the outcome.

This trait can influence Emergency Managers’ work in two ways: 1. Storm prediction is inherently ambiguous, which makes our jobs harder, and 2. Ambiguity from authorities causes people to hesitate, putting them in danger.

1. Storm prediction is ambiguous

Below is a comparison of American (blue) and European (red) computer models predicting the path of Hurricane Irma. The darkest lines are the averages. (Thanks to the Washington Post for this picture and many other excellent ones.) As you can see, there is a limit to how finely science can predict a hurricane’s progress. Imagine you’re the governor of Florida. Do you evacuate Jacksonville?

Irma prediction model
Group of simulations from American (blue) and European (red) computer models from Friday night [9/8/17]. Each color strand represents a different model simulation with slight tweaks to initial conditions. Note that the strands are clustered together where the forecast track is most confident but diverge where the course of the storm is less certain. The bold red line is the average of all of the European model simulations, while the blue is the average of all the American model simulations. (StormVistaWxModels.com)
Fortunately for the real Governor of Florida, storm prediction has vastly improved since the deadliest storm in American history--the Galveston Hurricane of 1900 (this was before they started naming storms). At the time, American meteorologists had a poor understanding of how storms behaved over the ocean. Though the more experienced Cuban meteorologists warned of an incoming hurricane, the message was ignored and no one evacuated. Surging waters killed 8,000 of the 37,789 residents, or about 20% of the population.

After World War II, “the U.S. still used pretty simple forecasting tools. Airplanes took rough rides into these tempests, found the storm’s center, and then returned every six hours to find the center once again,” reports Popular Science. The U.S. launched its first weather satellite in 1960, and the first satellite images were broadcast on television in the 1970s.

The last decade or so has seen even greater improvements in prediction through better satellite technology and computer modeling. The Natural Hazards Review estimates that weather satellites have prevented up to 90% of the deaths that would have occurred had meteorologists not had satellites available. NOAA reports that its errors in storm tracking have dropped by 50% in the last 15 years, while in the last 5 years, NOAA has improved its notice-giving by 12 hours. Public officials now have 36 hours of advance notice. If it hadn’t been for these improvements, weather experts estimate, 10,000-20,000 people would have been killed in Hurricane Katrina instead of the actual 1,200. Because of storm tracking, only 15% of New Orleans’ population was still in the city.

The bad news is that there is still ambiguity in storm tracking--for instance, scientists still have a hard time judging the intensity of a storm. The good news is, the ambiguity is far less than it used to be.

2. Ambiguity from authorities can cause inaction.

The ambiguity of storm prediction can creep into the language used by public leaders, which directly causes people to hesitate to take action or to disregard warnings. Studies show that people use multiple sources of information when trying to make a decision, and that people are more likely to take action when a) sources agree and b) information is consistent over time.

Let’s compare the evacuation orders from Hurricane Harvey and Hurricane Irma. Evacuation messaging for Hurricane Irma was consistent and forceful, and Florida evacuated smoothly. On the other hand, Texas officials were criticized by some for their weak and inconsistent evacuation directions.

Before Hurricane Harvey, Houston Mayor Sylvester Turner told residents to shelter in place. Meanwhile, Texas Governor Greg Abbott said, “If you have the ability to evacuate and go someplace else for a little while, that would be good.” This mismatch in message caused many residents to stay put. In Mayor Turner’s defense, he was expecting flooding rather than high winds, and driving on flooded streets is far more dangerous than staying in your house. The two public leaders judged the ambiguous weather data differently from their different vantage points.

ap_17239711478977_wide-911970ea2b58f92f1b30a213324a490678e43c90-s800-c85
Pic courtesy of NPR

Additionally, Gov. Abbott’s “evacuation order” seems weak. “If you can…that would be good” sounds like a suggestion on par with “If you could get me butter at the store, that would be good.” At first, I was frustrated because I assumed Gov. Abbott was just a bad public speaker. “Do you want people to evacuate or not?!” I yelled at the TV. (Please forgive me, Mr. Governor.) But after reflection, I think his message was ambiguous because it had to be. Here are the facts I imagine were in Abbott’s mind: 1. I want you to evacuate. 2. Evacuation causes traffic jams; we all remember the horror of the 2005 evacuation from Hurricane Rita--the largest evacuation on record. 3. Smart people are telling me that this could just be rain, in which case I don’t want millions of people flooded and drowning on the highways. 4. If I explicitly call for voluntary evacuation, people might evacuate from safe areas, blocking the road for people trying to evacuate from dangerous areas. Poor Gov. Abbott. Not only is there ambiguity arising from the limitations of science and from different vantage points, but there is ambiguity in messaging because of conflicting motivations.

Fortunately, evacuations for Hurricane Irma went smoothly. We could make the argument, as Alan Bernstein, spokesperson for Houston Mayor Turner does, that this was due to Irma’s certainty. He said to NPR, “Irma is totally different. It is forecast for a direct hit on populous areas, bringing highly destructive winds and perhaps heavy coastal destruction. That was not the case here, and Mayor Turner would not second-guess an evacuation order for Florida.”

 

All I can say is: Thank God for better storm tracking.

Further Reading

Humans and Information: Managing a herd of cats.

I was talking to a project manager the other day about our common struggle: information systems management. We argued the various pros and cons of software designed to organize teams and information (’cause we’re nerds). As our conversation progressed over issues of human error and information tagging, I got to thinking about the similarities of maintaining knowledge and maintaining groups.

Group dynamics and information sharing are inextricably linked in a number of ways. On top of that–or maybe because of that–maintaining groups and information systems takes a lot of the same resources and methodology. Here are some links between the two:

Link #1: Group relationships (i.e. ‘cohesion’) affect how information is shared.

Rupert Brown says in Group Processes (2000) that the more tightly bonded group members feel to one another, the more likely they are to share information--whether it’s useful or not. However, tightly cohesive groups tend to be more isolated from their environment, so their information can become repetitive or outdated. On the other hand, loosely associated groups tend not to share all information with everyone, but the information they do have tends to be unique and up to date because of the members’ contact with non-group members. Therefore, managing the group’s dynamics directly influences the flow of information throughout the group.

Side note: On the flip side of the coin, we might be able to say that information directly influences group relationships. If young adult dramedies have taught me anything, it’s that a rumor can make or break a friendship.

Link #2: More people increase group maintenance. More information increases system maintenance.

Network theory (and common sense) states that the more nodes a network has, the more complex the network is. Just think about how your family’s dynamics changed after your sister married that guy. Adding a node (brother-in-law) to Thanksgiving dinner changed the dynamic of the network (family). Moreover, larger networks with more complex arrangements require more resources in order to maintain them. For information systems, this might mean more people or more time spent on data entry. For groups, it might mean more time and energy spent on group-building activities, managing rumors, or resolving conflict, etc.
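The “more nodes, more complexity” point can be made concrete: in a fully connected network, the number of possible one-to-one relationships grows as n(n-1)/2, so each new member adds more links than the last one did. A quick sketch:

```python
def pairwise_links(n: int) -> int:
    """Number of possible one-to-one relationships in a group of n people."""
    return n * (n - 1) // 2

# A family of four has 6 relationships to maintain; adding one
# brother-in-law makes it 10 -- one new node, four new links.
print(pairwise_links(4), pairwise_links(5))  # 6 10
```

So network maintenance cost grows roughly with the square of the group size, not linearly, which is why each additional team member (or data source) costs more than the previous one.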

Side note: I suspect that the larger and more complex the network, the more inertia it develops, which is why very large groups (like governments) take so long to respond to changes in the environment (like a world-wide economic downturn).

Link #3: Information systems management is necessarily human systems management.

Collaboration requires not only that we share information, but that we share it in a way we can all recognize, access, and manipulate. Each team member must be trained in a standardized method for handling group information. We all have to use the same file-naming system, the same date system, the same tags. Furthermore, as we alluded to in Link #2: a) the more types of information there are to be handled, the more complex the system for managing it becomes, and b) the more people handling the information, the more complex the system for managing it becomes. Information management is directly dependent on group management.

This link is where project managers and emergency managers spend so much of their time. It seems like we’re all struggling to get everyone else to manage information correctly.

Link #4: The person who manages group life also tends to manage information flow. 

It’s easier to spot in small, informal groups, but in every group there is a gatekeeper. A gatekeeper is a person who manages access to benefits which they do not own. For instance, access to the boss, a spot on the agenda, or access to illicit information (like rumors or secrets which they may trade for more political capital).

I, personally, like to think of the gatekeeper as someone who manages the group’s Transactive Memory System (TMS) which is a fancy sociological term for knowing who knows what. Usually, the gatekeeper is well connected in the group and–especially if they’ve been there a long time–generally knows who knows what. They are a valuable resource for members as they can direct them to those members with the expertise or connections they need. Gatekeepers control information flow in the group in a very direct way.
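A transactive memory system can be pictured as nothing more than a directory mapping topics to the people who know them, with the gatekeeper doing the lookups. A minimal sketch (all names and topics below are invented for illustration):

```python
# Toy transactive memory system: a directory of "who knows what."
# Names and topics are made up for the example.
tms = {
    "flood maps": ["Priya"],
    "grant paperwork": ["Marcus", "Priya"],
    "radio protocols": ["Dana"],
}

def who_knows(topic: str) -> list:
    """Gatekeeper lookup: direct a member to whoever has the expertise."""
    return tms.get(topic, [])
```

In a real group this directory lives in the gatekeeper’s head, which is exactly why it becomes a bottleneck: every lookup has to go through one person.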

Here’s where I go out on a theoretical limb. We’ve often experienced how information gets stymied during a crisis, and a lot of research (including my own) is focused on how to open the channels of communication. What if part of the problem of information flow is that the gatekeeper gets overwhelmed by requests, literally or figuratively? What if they don’t prioritize information correctly and let something important drop? What if they can’t help the group efficiently because there’s too much noise (i.e. the opposite of signal) to process requests for information/expertise?

So the solution might be to decentralize gatekeeping duties across the whole group (or at least more members), but then you run into the increased maintenance cost of more people handling information. Hmm….

Conclusion: 

There seems to be an intimate association between handling group members and handling information. They take a lot of the same resources and processes, and the costs in time and effort of maintaining each seem to be linked. Each individual may have a limit to how much information they can process at one time, necessitating collaboration with other individuals. But collaboration is inherently more costly as networks increase in complexity. Therefore, managers must cleverly walk the delicate line between too little and too much information, between too few and too many team members.

So if you ever feel like the system should behave better than it does, keep in mind: you’re actually managing two related yet unique systems. Problems in one might be symptoms of problems in the other.