Did Sightbit measure the situation awareness of lifeguards with and without their solution? Are they sure lifeguards will not come to rely on the AI, potentially miss obvious distressed swimmers, and be useless when the system fails completely?
Will lifeguards who have always used this system be trained so they don't panic when it fails, for example because of a power outage?
However, I see a big risk that this could lead to a trend of less qualified (cheaper) lifeguards, hired by people who feel more pressure about budgets than about actual safety (the liability probably lands mostly on the lifeguard's neck anyway).
Considering how everything these days is run on an (often bad) cost/benefit analysis, I'm anything but positive about the "unforeseen" side effects of AI in this field.
As difficult as it is to spot a single drowning person in a sea of people (no pun intended), it might in fact be partly the feeling of sole responsibility that keeps (good) lifeguards as alert as they often are. I'm not so sure AI will have a net positive effect on that.
Lifeguarding is already a minimum-wage job, and there are a ton of seasonal workers who aren't exactly that qualified to begin with.
What do ATCs need AI for, anyway? At least as a failsafe against controllers issuing catastrophic orders, or to catch pilots not following orders (or veering off course), spatial awareness and collision checks are basic geometry only, and the system can be fed automatically with radar and AIS data.
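The "basic geometry" the comment alludes to can be sketched as a closest-point-of-approach check between two constant-velocity tracks. This is a deliberately simplified model of my own (real conflict detection also handles altitude, turns, and measurement uncertainty):

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Closest point of approach between two constant-velocity tracks.

    p1, p2: (x, y) positions in nautical miles
    v1, v2: (vx, vy) velocities in knots
    Returns (time_hours, separation_nm) of minimum future separation.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:                       # identical velocities: gap is constant
        return 0.0, math.hypot(dx, dy)
    t = max(0.0, -(dx * dvx + dy * dvy) / dv2)  # time of minimum separation
    sx, sy = dx + dvx * t, dy + dvy * t
    return t, math.hypot(sx, sy)

# Two aircraft converging head-on along the x-axis, 10 nm apart at 240 kt each:
t, sep = closest_approach((0, 0), (240, 0), (10, 0), (-240, 0))
# Minimum separation is 0 nm, reached after 10/480 hours (75 seconds).
```

A monitoring system would flag any pair whose predicted minimum separation falls below the applicable separation minimum within some look-ahead window.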
> Are they sure lifeguards will not come to rely on the AI, potentially miss obvious distressed swimmers, and be useless when the system fails completely?
That, plus it has been shown multiple times that AIs trained on datasets of mostly normal-weight white people have issues with people of color, Asian people, or people who are over- or underweight. And given that many discrimination issues in AI only pop up well after release, this scares me.
Lifeguards in that situation can "fail safe" by ordering everybody out of the water.
If these systems are coupled with the right sort of training, they might be a net benefit. Or maybe the system could be designed in such a way that requires the lifeguard to stay attentive, such as requiring the lifeguard to input the current headcount. If the lifeguard's headcount starts to disagree with the computer's, that could be a signal that the lifeguard has become fatigued and needs to call in another lifeguard or call people out of the pool. (If the system isn't accurate enough to be used in this way, then perhaps it's not ready for use at all.)
> If these systems are coupled with the right sort of training, they might be a net benefit. Or maybe the system could be designed in such a way that requires the lifeguard to stay attentive, such as requiring the lifeguard to input the current headcount.
I too worked as a lifeguard, and I think only a really bad implementation could have a counter-productive effect. Off the top of my head, I can think of sunglasses with an AR overlay and a feedback loop:
* mark all people in field of view, color coded
* let lifeguard acknowledge/ignore problems
* allow for "problem"-handover to next post (e.g. if busy or if it is a swimmer in a current)
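The acknowledge/ignore/handover loop above could be modeled as a tiny per-detection state machine. The state and action names here are my invention, only to make the idea concrete:

```python
# Hypothetical per-detection state machine for the AR-overlay idea above.
# "flagged":      the AI marked a possible problem in the field of view
# "acknowledged": the lifeguard confirmed it and is handling it
# "ignored":      the lifeguard dismissed it as a false positive
# "handed_over":  the problem was passed to the next post
TRANSITIONS = {
    "flagged": {"acknowledge": "acknowledged", "ignore": "ignored",
                "handover": "handed_over"},
    "acknowledged": {"handover": "handed_over"},
}

def step(state, action):
    """Advance a detection's state; invalid actions leave it unchanged."""
    return TRANSITIONS.get(state, {}).get(action, state)
```

With `step("flagged", "acknowledge")` yielding `"acknowledged"` and a later `"handover"` yielding `"handed_over"`; dismissed detections stay closed. The key design property is that every AI flag requires an explicit lifeguard action, which is exactly what keeps the human attentive.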
What definitely does not help is putting lifeguards behind monitors, because they'd miss out on 99% of the real daily "action": dealing with littering, violence, and ordinance violations, answering questions, pointing new arrivals towards safe zones, and so on.
That all assumes you've solved the problem of training a good-enough detector (presumably from a similar dataset), which has its own difficulties, but OP's question made me wonder about the aspect I described above.
- Do people know they're under surveillance?
- What will Sightbit do when issued a warrant for all that video and data?
There is a great website where you can try your spotting skills yourself.