- Rapid, passive disposal in the unlikely event of a failed spacecraft
- Self-cleaning debris environment in general
- Reduced fuel requirements and thruster wear
- Benign ionizing radiation environment
- Fewer NGSO operators affected by the SpaceX constellation
The first two are because there is more atmospheric drag. I believe that orbital debris in the case of a collision was something SpaceX was struggling to mitigate (everyone struggles, but no one has put up this many satellites before).
The third is because originally the plan was to launch to a 400km orbit and then have the satellites lift themselves to a 1150km orbit. Now they intend to launch to a 300-350km orbit and lift themselves to 550km. They expect that the smaller amount of lifting will increase satellite lifetime by 50% even after accounting for atmospheric drag.
The fourth is apparently just "there's less radiation lower, and radiation is bad for electronics".
The fifth is just "fewer of the other proposed internet constellations operate at this altitude," as far as I can tell.
(All information sourced from the technical information attachment)
I'm curious: why will this provide a longer life? Is it the lift burn itself that affects the lifetime (so reduced lift burn is better for the satellite)?
If you want to keep a satellite in a mostly circular orbit anywhere from 350x350 km to 600x600 km, you do periodic, very small boost maneuvers.
From my layman's understanding, because aerodynamic shaping stops helping at very low pressure, the exhaust speed would have to be roughly the travel speed (TAS) x (total cross section / intake cross section) just to maintain orbit. Assessing whether that puts the concept in the realm of feasible technology is beyond my skills, but there do at least seem to be projects working on the question. If it works out, it would completely change the economics of LEO use.
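That cross-section argument can be put in rough numbers (a sketch only; the 3:1 area ratio is an assumed illustrative value, and drag/collection coefficients are ignored). In free molecular flow, drag scales with the momentum flux across the full cross section, while an air-breathing thruster only collects propellant through its intake, so balancing thrust against drag gives the exhaust-speed requirement above:

```python
# Thrust-drag balance for a hypothetical air-breathing thruster in very low orbit.
# Drag ~ rho * v^2 * A_total (free molecular flow, coefficients ignored);
# collected mass flow ~ rho * v * A_intake; thrust = mdot * v_exhaust.
# Setting thrust = drag gives: v_exhaust = v * (A_total / A_intake).

v_orbit = 7.8e3     # orbital speed in very low orbit, m/s
area_ratio = 3.0    # A_total / A_intake -- assumed, purely illustrative

v_exhaust = v_orbit * area_ratio
print(f"required exhaust velocity: {v_exhaust / 1000:.1f} km/s")
```

Exhaust velocities in the tens of km/s are electric-propulsion territory, well beyond any chemical rocket, which is why the concept hinges on ion thrusters.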
Basically, there's already nothing left at their new planned altitude. In either scenario they'd need to burn fuel in order to compensate for irregularities in Earth's gravitational field, but I assume they've run the numbers internally.
The higher the orbit, the more fuel you need to deorbit.
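To put rough numbers on that, here's a back-of-the-envelope vis-viva calculation (a sketch, not SpaceX's published figures) for a single retrograde burn that drops the perigee to ~60 km, where drag finishes the job:

```python
import math

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2
RE = 6371.0       # mean Earth radius, km

def deorbit_dv(alt_km, perigee_alt_km=60.0):
    """Delta-v (km/s) for one burn lowering a circular orbit's perigee
    into the atmosphere, via the vis-viva equation."""
    r = RE + alt_km
    a = (r + RE + perigee_alt_km) / 2.0             # transfer-ellipse semi-major axis
    v_circ = math.sqrt(MU / r)                      # circular orbital speed
    v_apogee = math.sqrt(MU * (2.0 / r - 1.0 / a))  # speed at transfer apogee
    return v_circ - v_apogee

for alt in (550, 1150):
    print(f"deorbit burn from {alt} km: {deorbit_dv(alt) * 1000:.0f} m/s")
```

Roughly 140 m/s from 550 km versus roughly 290 m/s from 1150 km, i.e. the deorbit budget about doubles at the higher altitude (and at 550 km, drag alone will eventually deorbit a dead satellite anyway).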
Even over short distances, the latency (which the "Legal Narrative" PDF quotes as 15ms) is negligible for almost all applications.
Paper: http://nrg.cs.ucl.ac.uk/mjh/starlink-draft.pdf / video: https://www.youtube.com/watch?v=AdKNCBrkZQ4
If we're talking about theoretical limitations then hollow core waveguide fibers should get you close to vacuum speed.
But 4,400+ satellites! How many launches is that going to take?
Also, glad to see that Alaska doesn't get shafted in this. As the computer voice noted, it's an FCC requirement.
Using a Falcon 9 at 25 satellites per launch, it would take 177 flights; over 5 years that's about 36 flights per year.
Using a Falcon Heavy with 40 satellites, it would take 111 flights; over 5 years that's about 22 flights per year.
Using a BFR, assuming 350 satellites per launch (until someone comes up with a better number), it would need just 13 flights total.
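The arithmetic behind those counts, using the originally authorized 4,425 satellites (the per-launch figures are the guesses above, not published manifests; straight division gives 111 flights for Falcon Heavy, and spare satellites would push any of these numbers up):

```python
import math

SATS = 4425  # originally authorized constellation size

# Satellites-per-launch figures are assumptions, not published manifests.
flights = {vehicle: math.ceil(SATS / per_launch)
           for vehicle, per_launch in
           [("Falcon 9", 25), ("Falcon Heavy", 40), ("BFR", 350)]}

for vehicle, n in flights.items():
    print(f"{vehicle}: {n} flights ({n / 5:.1f} per year over 5 years)")
```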
Only a little </s>.
Imagine a business plan for 4,400 satellites being sane. What a world.
In those 10 years they went from being literally laughed out of rooms to being at the forefront of the industry. And even now that they are arguably "on top" in many ways, they keep trying to do these insane things.
They are currently trying to build the rocket with the largest launch capacity in history, to launch and maintain the largest satellite network in history (if successful, it won't just be the largest; it will be at or near half of all satellites in orbit!), and they still have the goal of getting humans to Mars.
Say what you will about Elon Musk, the guy knows how to set goals.
In my view, that's kind of the point. SpaceX internet service might make money, or it might not; let's presume it loses a little money over time. That's fine, because its true purpose is to reduce costs for SpaceX the rocket manufacturer and refurbisher.
Manufacturing facilities operate best when they have an even load. Scaling up and down, laying off and then hiring, etc., is bad for business. By having this perfectly flexible internal customer, SpaceX can do a lot less of that and scale up at an even pace.
The end goal will be SpaceX launching every X days exactly, always with a mix of external and internal customers on those launches.
if you follow the motion of the boxes.
"On March 29, 2018, the Commission authorized Space Exploration Holdings, LLC, a wholly owned subsidiary of Space Exploration Technologies Corp. (collectively, “SpaceX”), to construct, deploy, and operate a constellation of 4,425 non-geostationary orbit (“NGSO”) satellites using Ku- and Ka-band spectrum. With this application, SpaceX seeks to modify its license to reflect constellation design changes resulting from a rigorous, integrated, and iterative process that will accelerate the deployment of its satellites and services. Specifically, SpaceX proposes to relocate 1,584 satellites previously authorized to operate at an altitude of 1,150 km to an altitude of 550 km, and to make related changes to the operations of the satellites in this new lower shell of the constellation."
"Under the modification proposed herein, SpaceX would reduce the number of satellites and relocate the original shell of 1,600 satellites authorized to operate at 1,150 km to create a new lower shell of 1,584 satellites operating at 550 km"
This shell will also now use 24 orbital planes instead of an originally planned 32. (per a table in the technical information pdf).
The total number of satellites in the constellation goes from 4,425 to 4,409.
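A quick check of the filing's arithmetic (plane count from the table in the technical information pdf):

```python
# Swap the 1,600-satellite shell at 1,150 km for a 1,584-satellite shell at 550 km.
original_total = 4425
new_total = original_total - 1600 + 1584
satellites_per_plane = 1584 // 24  # the new shell spreads over 24 orbital planes

print(f"new constellation total: {new_total}")
print(f"satellites per plane in the 550 km shell: {satellites_per_plane}")
```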
So, while it is a valid concern... until we put up "millions" of items, I think astronomers will be pretty safe. However, orbital debris avoidance is a much bigger issue.
Edit: Clarification. The above is only true if having a satellite in the field of view is enough to be a problem.
That said, this project will approximately double the number of man-made satellites. I believe these satellites will be smaller than typical, but also closer than typical.
I'm not an expert, but I think the number of satellites is actually a really bad predictor of impact. In particular, there was one set of satellites that had really bad effects for some reason. See more here.
They have not publicly stated anything about the inter-satellite links, but a research paper by independent researchers(2) estimated, based on other state-of-the-art hardware, that they could get >100 Gbps of bandwidth between two communicating satellites. The total trans-Pacific capacity between NZ and the US would be better than the SC fiber line because there will be many different non-sharing paths using different satellites.
(1): Also likely for crossing connections. The paper mentioned below assumed lasers for those too, but that is IMHO unlikely because the steering would be too hard. Lasers work great between satellites on the same plane and on neighboring planes, because the angular rate the system needs to track stays very low -- nearby satellites are almost stationary from each other's point of view. In contrast, satellites on crossing planes zip by very fast and have high angular motion, especially when they pass close by.
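A rough illustration of the tracking problem (all numbers assumed for the sketch: circular speed at 550 km and a 60° crossing angle between planes; the real geometry changes continuously along the orbit):

```python
import math

v_orb = 7.59                 # km/s, circular orbital speed at ~550 km
crossing_angle_deg = 60.0    # assumed angle between the two orbital planes

# Relative speed where the planes cross: |v1 - v2| = 2 * v * sin(theta / 2)
v_rel = 2 * v_orb * math.sin(math.radians(crossing_angle_deg / 2))

def tracking_rate_deg_s(separation_km):
    """Angular rate a laser gimbal must sustain to track the other satellite."""
    return math.degrees(v_rel / separation_km)

for sep in (2000, 500, 100):
    print(f"separation {sep:>4} km -> ~{tracking_rate_deg_s(sep):.2f} deg/s")
```

A same-plane neighbor needs essentially zero sustained slew; a crossing-plane target at close range needs several degrees per second, plus a handoff every pass.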
Those speeds may be great for individual end users, but if you're a datacenter that needs a lot of bandwidth, starlink would be limited by the shared radio bandwidth to the sats, which could be quite crowded in an urban environment, while fiber wouldn't be. That's why I only see them competing on latency, not on bandwidth.
Their plan to launch O(1000) satellites is to get lower latency and higher bandwidth, which would render the current generation of satellite internet obsolete.
It's a great example of the sort of business plan that's only possible with cheap launches that SpaceX's reusable rockets have provided.
Here's a more detailed primer:
Why not? Just saying "you are wrong" doesn't really add much to the conversation. I'm interested to know more.
The other issue with LEO is that if you want to double your satellite capacity, you need to launch another fleet as large as the one you currently have in the sky, instead of just one or two more large satellites. This presents logistical problems, and to some extent technical ones as well.
When people typically refer to GEO satellites, they're unfairly making the assumption that the old crop of GEO satellites are where technology is today; namely fixed, low-capacity satellites. This is not the case. With the HTS (High-throughput satellites) and XTS (Extreme throughput satellites) that have movable capacity, not only do you have a comparable amount of usable capacity to LEO, but you can also move it as business needs change. The latency issue will never change, but if you see my other comments, I'm skeptical they'll be able to achieve the latency everyone is quoting.
order of...order of
yeah I know there is no "growth"
And I wasn't saying that "on the order of" is an approximation of what O() actually is in CS, merely that that is how OP used it.
LEO latency best case: 6ms
I've used those Hughes satellite connections before, and I never got anything close to 250ms. More like 400ms.
Consumer grade hughesnet stuff will vary anywhere from 495ms in the middle of the night to 1100ms+ during peak periods due to oversubscription.
It would be technically possible, but uneconomical and an inefficient use of space segment transponder kHz to have customers in Wyoming moving traffic through a teleport in the Chicago area. Here's an illustration of Ka-band spot beams on a typical state of the art geostationary satellite:
Applying the same concept to starlink, telesat's proposed system, and oneweb, if they build a number of teleports geographically distributed near rural areas, it will allow individual satellites to serve as bent-pipe architecture from CPE --> Teleport, within the same moving LEO spot beams, or to have customer traffic take only one hop through space to an adjacent satellite before it hits the trunk link to an earth station. For example customers in a really rural area of north Idaho along US95 might "see" a set of moving satellites that also have visibility to an earth station in Lewiston, ID, where carrier grade terrestrial fiber links are available. Or a customer in a remote mountainous area of eastern Oregon may uplink/downlink through a teleport in Bend.
The ultimate capacity of the system will be determined by how few hops through space they can get the traffic to do. Since every satellite will be identical and capable of forming a trunk link to a starlink-operated earth station, when it's overhead of it, they have an incentive to build a large number of earth stations geographically distributed around the world.
It's basically the same idea as o3b's architecture but at a much smaller scale.
Another consideration: adding another 50ms to GEO latency isn't really going to change anyone's opinion. It's still targeted towards streaming, and latency doesn't matter as much since they're not targeting real-time gamers. SpaceX needs the latency to be very low to hit that market. There's a world of difference going from a 30ms ping to an 80ms ping, and once you're past a certain point, it puts you in the same camp as GEO.
Light travels about 1,800 km in 6 ms, but that's just one way. Straight up and back down at 550 km is about 3.7 ms.
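Checking those figures:

```python
C = 299792.458  # speed of light in vacuum, km/s

one_way_6ms_km = C * 0.006            # distance light covers in 6 ms
up_down_550_ms = 2 * 550 / C * 1000   # straight up and back down at 550 km

print(f"light covers {one_way_6ms_km:.0f} km in 6 ms")
print(f"550 km up and back down: {up_down_550_ms:.2f} ms")
```

So ~3.7 ms is the physical floor for a single overhead bent-pipe hop; queuing, processing, and the slant range to a satellite that isn't directly overhead add the rest.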
Maybe using the ones in lower orbit to cover more densely populated locations?