Application for Fixed Satellite Service by SpaceX (fcc.report)
91 points by lgats 4 months ago | 89 comments

The application lists the following benefits:

- Rapid, passive disposal in the unlikely event of a failed spacecraft

- Self-cleaning debris environment in general

- Reduced fuel requirements and thruster wear

- Benign ionizing radiation environment

- Fewer NGSO operators affected by the SpaceX constellation

The first two are because there is more atmospheric drag at the lower altitude. I believe orbital debris in the event of a collision was something SpaceX was struggling to mitigate (everyone struggles with it, but no one has put up this many satellites before).

The third is because originally the plan was to launch to a 400km orbit and then have the satellites lift themselves to a 1150km orbit. Now they intend to launch to a 300-350km orbit and lift themselves to 550km. They expect that the smaller amount of lifting will increase satellite lifetime by 50% even after accounting for atmospheric drag.
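For a rough sense of how much less lifting the new plan requires, here's a back-of-envelope Hohmann transfer estimate. This is a simplification: these satellites use continuous low-thrust electric propulsion, not two impulsive burns, and the starting altitudes are taken from the figures above; but the comparison is indicative.

```python
import math

MU = 398600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0   # mean Earth radius, km

def hohmann_dv(alt1_km, alt2_km):
    """Total delta-v (km/s) for an impulsive two-burn transfer
    between circular orbits at the given altitudes."""
    r1, r2 = R_EARTH + alt1_km, R_EARTH + alt2_km
    dv1 = math.sqrt(MU / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(MU / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return dv1 + dv2

old_plan = hohmann_dv(400, 1150)  # original: 400 km -> 1150 km
new_plan = hohmann_dv(300, 550)   # revised: 300-350 km -> 550 km
print(f"old: {old_plan*1000:.0f} m/s, new: {new_plan*1000:.0f} m/s")
```

The old plan needs roughly 390 m/s of orbit raising versus roughly 140 m/s for the new one, i.e. close to a 3x reduction in lifting, which is where the fuel and thruster-wear savings come from.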

The fourth is apparently just "there's less radiation lower, and radiation is bad for electronics".

The fifth is just "less of the other theoretical internet constellations are at this height" as far as I can tell.

(All information sourced from the technical information attachment)

> They expect that the smaller amount of lifting will increase satellite lifetime by 50% even after accounting for atmospheric drag.

I'm curious: why will this provide a longer life? Is it the lift burn itself that affects the lifetime (so reduced lift burn is better for the satellite)?

You can use fuel for station-keeping instead of orbit lifting.

Wouldn't burning all station-keeping fuel immediately be better than riding through thicker atmosphere for a while, and burning fuel bit by bit?

No. A modern satellite with a very high-efficiency ion thruster (high specific impulse, low thrust as measured in newtons), once it's been ejected from the second stage of a rocket and is above 99.99% of the atmosphere, would end up in something like a 45,000 x 450 km elliptical orbit if it burned all of its stored xenon fuel immediately after launch.

If you want to keep a satellite in a mostly circular orbit anywhere from 350 x 350 km to 600 x 600 km, you do periodic, very small boost maneuvers.

On an unfortunately very theoretical (but apparently already patent-encumbered, from what I can glean from Wikipedia) level there is also the concept of air-breathing electric propulsion: an ion engine replenished from the same trace atmosphere that causes the drag the engine is supposed to counter. Basically a solid-state propeller that can still work in very thin air.

From my layman's understanding, because the effect of aerodynamic shaping breaks down at very low pressure, exhaust speed would have to be travel speed (TAS) x (total cross section / intake cross section) to keep orbit. Assessing whether that puts the concept in the realm of feasible technology or not is beyond my skills, but at least there seem to be projects working on that question. If it does work out, it would completely change the economics of LEO use.
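The relation described above can be written down directly. A sketch, under the same simplifying momentum-balance assumption (every molecule hitting the total cross section contributes drag, but only those entering the intake become propellant); the cross-section ratio and orbital speed below are purely illustrative:

```python
def required_exhaust_speed(v_ms, area_total_m2, area_intake_m2):
    """Exhaust speed needed for thrust to cancel drag, from the
    momentum balance  m_intake * v_e = m_total * v,  where mass
    flux scales with the respective cross-sectional areas."""
    return v_ms * area_total_m2 / area_intake_m2

# Illustrative numbers: ~7.8 km/s orbital speed, intake = 1/4 of frontal area
v_e = required_exhaust_speed(7800.0, 2.0, 0.5)
print(f"required exhaust speed: {v_e/1000:.1f} km/s")  # 31.2 km/s
```

With an intake a quarter of the frontal area, the required exhaust speed is already four times orbital velocity, which is why the feasibility question hinges on how much of the frontal area can actually feed the engine.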

I'm having a hard time finding good atmospheric density charts that go all the way to orbital altitudes, but this one looks good: http://hildaandtrojanasteroids.net/Atmosphere_model.png

Basically, there's already nothing left at their new planned altitude. In either scenario they'd need to burn fuel in order to compensate for irregularities in Earth's gravitational field, but I assume they've run the numbers internally.

Station keeping doesn't only involve drag; it involves rebalancing the orbital plane when a satellite fails; you also need to provision fuel to deorbit.

The higher the orbit, the more fuel you need to deorbit.
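A rough illustration of that point: the delta-v for a single retrograde burn that drops the perigee into the thick atmosphere (the 60 km target perigee here is my assumption), compared between the two altitudes under discussion.

```python
import math

MU = 398600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0   # mean Earth radius, km

def deorbit_dv(alt_km, perigee_alt_km=60.0):
    """Delta-v (km/s) for one impulsive burn lowering the perigee of a
    circular orbit to a reentry altitude, via the vis-viva equation."""
    r1 = R_EARTH + alt_km
    r2 = R_EARTH + perigee_alt_km
    a = (r1 + r2) / 2                       # transfer ellipse semi-major axis
    v_circ = math.sqrt(MU / r1)             # current circular speed
    v_apo = math.sqrt(MU * (2 / r1 - 1 / a))  # speed at apogee of transfer
    return v_circ - v_apo

print(f"from 550 km:  {deorbit_dv(550)*1000:.0f} m/s")   # ~141 m/s
print(f"from 1150 km: {deorbit_dv(1150)*1000:.0f} m/s")  # ~290 m/s
```

Deorbiting from 1150 km costs roughly twice the delta-v of deorbiting from 550 km, and from 550 km drag alone will finish the job within years even if the burn never happens.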

Ah! Thanks.

Curious that they don't mention lower communications latency as a benefit.

Perhaps because it's only 2ms per direction. I assume that someone that sensitive to latency wouldn't use satellite communications in the first place.

This constellation should have lower latency over long distances than is even theoretically possible using terrestrial fiber, because the speed of light in vacuum is substantially higher than the speed of light in fiber. [0]

Even over short distance the latency (which the "Legal Narrative" pdf quotes as 15ms) is negligible for almost all applications.

[0] paper http://nrg.cs.ucl.ac.uk/mjh/starlink-draft.pdf / video https://www.youtube.com/watch?v=AdKNCBrkZQ4
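To put numbers on that speed-of-light difference, here is a sketch. The route, the ~1.47 group index for silica fiber, and the assumption that the satellite path is 1.3x the great-circle distance (to account for up/down hops and routing) are all illustrative, not from the filing:

```python
C_KM_S = 299792.458    # speed of light in vacuum, km/s
FIBER_INDEX = 1.47     # typical group index of silica fiber (assumption)

def one_way_ms(dist_km, speed_km_s):
    """One-way propagation delay in milliseconds."""
    return dist_km / speed_km_s * 1000

great_circle = 10850.0  # roughly London to Singapore, km (illustrative)
fiber_ms = one_way_ms(great_circle, C_KM_S / FIBER_INDEX)
sat_ms = one_way_ms(great_circle * 1.3, C_KM_S)  # longer path, but full c
print(f"fiber: {fiber_ms:.0f} ms, satellite path: {sat_ms:.0f} ms")
```

The break-even is simply the refractive index: the satellite route wins as long as its total path length stays under ~1.47x the fiber route, and real fiber routes are themselves longer than the great circle.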

> than is even theoretically possible using terrestrial fiber,

If we're talking about theoretical limitations then hollow core waveguide fibers should get you close to vacuum speed.

They did elsewhere, just not in their list of benefits. Not sure why it wasn't included there.

This animation, based on the proposal, is good: https://m.youtube.com/watch?feature=youtu.be&v=AdKNCBrkZQ4

That animation was great. Just what I needed to wrap my brain around this.

But 4,400+ satellites! How many launches is that going to take?

Also, glad to see that Alaska doesn't get shafted in this. As the computer voice noted, it's an FCC requirement.

Good question, so I found this [1] FAQ, which says:

  Using a Falcon 9 at 25 satellites per launch it would take 177 flights, about 36 flights per year.
  Using a Falcon Heavy with 40 satellites it would take 112 flights, over 5 years that's about 22 flights per year.
  Using a BFR assuming 350 satellites per launch, until someone comes up with a better number, would need 13 flights total.
Now, those are based on the old, higher orbit, so presumably the numbers move substantially with this new plan.
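The same arithmetic, redone for the 4,409-satellite total in the amended application (the per-launch capacities are the FAQ's guesses, not official figures):

```python
import math

TOTAL_SATS = 4409  # constellation total under the modified application

def flights_needed(sats_per_launch):
    """Launches required to loft the full constellation."""
    return math.ceil(TOTAL_SATS / sats_per_launch)

for vehicle, capacity in [("Falcon 9", 25), ("Falcon Heavy", 40), ("BFR", 350)]:
    print(f"{vehicle} ({capacity}/launch): {flights_needed(capacity)} flights")
```

That still comes out to 177 Falcon 9 flights, 111 Falcon Heavy flights, or 13 BFR flights, so the revised total barely moves the launch count; what changes is the fuel budget per satellite.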


Ehh, give ‘em a few more years, they can do it with one rocket over the course of a few days.

Only a little </s>.

Imagine a business plan for 4,400 satellites being sane. What a world.

Not only that, but the company's first successful orbital launch was only 10 years ago!

In those 10 years they went from being literally laughed out of rooms to being at the forefront of the industry. And even now that they are arguably "on top" in many ways, they keep on trying to do these insane things.

They are currently trying to build a rocket with the largest launch capacity in history, launch and maintain a satellite network which is the largest in history (if successful, it won't just be the largest, but will be at or near half of all satellites in orbit!), and still have a goal of getting humans to Mars.

Say what you will about Elon Musk, the guy knows how to set goals.

> how many launches is that going to take?

In my view, that's kind of the point. SpaceX internet service might make money, or might not. Let's presume it loses a little bit of money over time. That's fine, because its true purpose is to reduce costs for SpaceX the rocket manufacturer and refurbisher.

Manufacturing facilities operate best when they have an even load. Scaling up and down, laying off then hiring, etc, is bad for business. By having this perfectly flexible customer, SpaceX can do a lot less of that. They can scale up at an even pace.

The end goal will be SpaceX launching every X days exactly, always with a mix of external and internal customers on those launches.

Wait, how are the satellites orbiting in planes that don't pass through the center of the Earth?

I think it just looks that way from the way it's rendered: the continuous line at the northern limit isn't an orbit, but formed from the northernmost segment of many.

No, not that: it looks like the satellites are orbiting in circles in high-latitude planes, if you follow the motion of the boxes.

I love this animation, although it may need to be slightly revised with this new FCC application.

Not sure where you got 116. Looks like the number is the same but they're relocating some.

"On March 29, 2018, the Commission authorized Space Exploration Holdings, LLC, a wholly owned subsidiary of Space Exploration Technologies Corp. (collectively, “SpaceX”), to construct, deploy, and operate a constellation of 4,425 non-geostationary orbit (“NGSO”) satellites using Ku- and Ka-band spectrum. With this application, SpaceX seeks to modify its license to reflect constellation design changes resulting from a rigorous, integrated, and iterative process that will accelerate the deployment of its satellites and services. Specifically, SpaceX proposes to relocate 1,584 satellites previously authorized to operate at an altitude of 1,150 km to an altitude of 550 km, and to make related changes to the operations of the satellites in this new lower shell of the constellation."

I think this quote better summarizes the change.

"Under the modification proposed herein, SpaceX would reduce the number of satellites and relocate the original shell of 1,600 satellites authorized to operate at 1,150 km to create a new lower shell of 1,584 satellites operating at 550 km"

This shell will also now use 24 orbital planes instead of an originally planned 32. (per a table in the technical information pdf).

The total number of satellites in the constellation goes from 4,425 to 4,409.

Still can't quite wrap my mind around the fact that the world's largest space/satellite programme is about to be run by an LLC.

We've reverted the title from the submitted “SpaceX Files to Introduce Starlink with just 116 Satellites 550 km”.

I wonder if there is any consideration for high quantities of LEO satellites affecting ground-based telescope operations? One question would be: how many LEO objects larger than, say, one cubic meter are up there right now? If SpaceX would be doubling that total, it does sound concerning. I personally love the idea of the global internet constellation they are working on; I just worry that there could be other ramifications.

I think something that most people don’t comprehend is just how HUGE space is. For context, at a 550 km orbit, the “surface area” of the sphere surrounding the Earth is ~602 MILLION square kilometers. Suppose we had 40,000 objects up there (double the number of objects larger than a softball that we currently track), all in that exact orbit: we would still only have one object for every ~15,000 square kilometers (~5,800 square miles).

So, while it is a valid concern... until we put up “millions” of items, I think astronomers will be pretty safe. However, orbital debris avoidance... much bigger issue.
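Checking that surface-area arithmetic (the 40,000-object count is the parent comment's assumption): the shell at 550 km works out to roughly 602 million km², or about 15,000 km² per object.

```python
import math

R_EARTH_KM = 6371.0
altitude_km = 550.0
n_objects = 40_000  # double the tracked softball-or-larger population

r = R_EARTH_KM + altitude_km
area_km2 = 4 * math.pi * r ** 2        # surface area of the orbital shell
per_object_km2 = area_km2 / n_objects

print(f"shell area: {area_km2:.3e} km^2")              # ~6.02e8 (602 million)
print(f"area per object: {per_object_km2:,.0f} km^2")  # ~15,000
```

This treats all objects as confined to a single thin shell; in reality they are spread across a range of altitudes, which makes any one shell even emptier.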

I agree with your conclusions. Just a nit: the r² increase in surface of a sphere that makes this so much bigger an area to scatter the satellites across also applies to the field of view of telescopes and thus cancels out.

Edit: Clarification. The above is only true if having a satellite in the field of view is enough to be a problem.

In my opinion, even if this were a huge problem for ground-based telescopes, it wouldn't matter: that negative externality would be tiny compared to the potential benefits.

That said, this project will approximately double the number of man made satellites. I believe these satellites will be smaller than normal, but also closer than normal.

I'm not an expert, but I think number of satellites is actually a really bad predictor for impact. In particular there was one set of satellites that had really bad effects for some reason. See more here [0].

[0] https://en.wikipedia.org/wiki/Satellite_flare

Yeah, I wonder whether modern telescopes that do digital integration can effectively just delete the errant frames from the recorded captures and carry on from there. In the old days with film capture, the entire exposure would be toast if a satellite flew through the frame.

Are SpaceX going to put a bunch of their previous satellite operator customers out of business?

Not just that, they will even compete on latency with transoceanic fiber operators.

Also bandwidth. Because there is little other use for the satellites over the pacific, the instant the first phase of the constellation is up, it is better than the Southern Cross fiber line at everything it does.

I'm a bit skeptical about that. Fiber has mind-boggling bandwidth scaling compared to beaming radio signals through a hundred kilometers of atmosphere. Light has a higher bandwidth and you can put more than one fiber into a cable and there's less interference too.

Their satellites don't communicate with each other by radio links, but by laser. Laser works better in vacuum than it does in fiber. Radio is just for the downlinks. (1)

They have not publicly stated anything about the inter-satellite links, but a research paper published by independent researchers(2) estimated that based on other state of the art, they get >100Gbps of bandwidth between two communicating satellites. The total trans-pacific capacity between NZ and the US will be better than the SC fiber line because there will be many different non-sharing paths using different satellites.

(1): Also likely for crossing connections. The paper mentioned below assumed use of laser for those too, but that is IMHO unlikely, because steering would be too hard. Lasers work great for satellites on the same plane and on neighboring ones, because the angular rate the system needs to track stays very low -- the satellites near them are almost stationary from their point of view. In contrast, satellites on crossing planes zip by very fast and have high angular motion, especially when close.

(2): http://nrg.cs.ucl.ac.uk/mjh/starlink-draft.pdf

The up/downlink is the bottleneck if you want to provide backbone-like connectivity over satellites, and that's still done via radio.

Those speeds may be great for individual endusers, but if you're a datacenter which needs a lot of bandwidth then starlink would be limited by the shared radio bandwidth to the sats, which could be quite crowded in an urban environment, while fiber wouldn't be. That's why I only see them as competing on latency, not on bandwidth.

10.7GHz... right on the edge of a radio astronomy protected band. Having a downlink here is not very good...

How can they prevent these satellites from being hacked? If they were, they would be hard to recover, I'd guess.

How do you prevent any communication satellite from being hacked? What makes you think that those satellites are easier to hack than others?

Do you know why they are doing this? A lower orbit is probably cheaper?

Low orbit is cheaper to get a satellite to just considering launch costs (less fuel, easier to recover the first stage), but as siblings have mentioned, it's more expensive overall, particularly considering that you need lots more satellites to provide coverage.

Their plan to launch O(1000) satellites is to get lower latency and higher bandwidth, which would render the current generation of satellite internet obsolete.

It's a great example of the sort of business plan that's only possible with cheap launches that SpaceX's reusable rockets have provided.

Here's a more detailed primer:


No, it would not render the current generation obsolete. Not only that, but they are years away from having a fleet up there, and a decade from having the full fleet. Satellite technology progresses a lot in a decade, so you're doing the equivalent of comparing the iPhone 12 to the Galaxy s2.

> No, it would not render the current generation obsolete.

Why not? Just saying "you are wrong" doesn't really add much to the conversation. I'm interested to know more.

You're right, I should have explained better. If you take a look at many of the other satellites announced that don't get the same hype as SpaceX, you'll notice that they're comparable capacity, or have lower advertised capacity with other tradeoffs. SES mPOWER, viasat-3, and Jupiter 3 to name a few. The companies that stick to GEO satellites maintain that LEO satellites waste a lot of capacity over water, and there's nothing you can do about that. Not only water, but underserved areas that can't afford satellite technology are also included in this. Depending on the numbers you look at, this could mean the effective capacity of the satellite is ~10% of what is advertised.

The other issue with LEO is that if you want to double your satellite capacity, you need to launch twice as many satellites as you currently have in the sky, instead of just one or two more large ones. This presents logistical problems, and technical ones as well to some extent.

When people typically refer to GEO satellites, they're unfairly making the assumption that the old crop of GEO satellites are where technology is today; namely fixed, low-capacity satellites. This is not the case. With the HTS (High-throughput satellites) and XTS (Extreme throughput satellites) that have movable capacity, not only do you have a comparable amount of usable capacity to LEO, but you can also move it as business needs change. The latency issue will never change, but if you see my other comments, I'm skeptical they'll be able to achieve the latency everyone is quoting.

Very interesting, thanks for sharing.

What is O(1000) satellites?

It's an unnecessary misuse of the Big O notation used to establish bounds of functions. I think they just meant up to 1000 satellites

The author means literally 1000 satellites.

It's borrowed from comp sci, but you can read O(x) as "on the order of x", so maybe 900, maybe 1200, but somewhere around there.

In Comp Sci O() notation has a very specific meaning and "on the order of" does not approximate it. I think it was probably just a misuse of the notation.

In informal contexts it's also used as a fudge/handwave, purely as a questionable analogy that nobody is expected to take too seriously. `theptip` almost certainly knows that O(1000)=O(1), they were just being playful.

I have seen it used in many places, and it's understood as "order of". O(n) means the worst-case number of steps needed to complete an algorithm would be c·n + a. So it is a valid use, since O(1000) would mean c·1000 + a, which reads as "on the order of 1000".

"The letter O is used because the growth rate of a function is also referred to as the order of the function"

order of...order of

yeah I know there is no "growth"

I guarantee you can find dozens of examples of CS-type people using O() in non-algorithmic-complexity topics as "on the order of" or "approximately" right here on HN within the past 6 months. They aren't all misusing that notation - it is being co-opted into a more general lexicon, whether you like it or not.

And I wasn't saying that "on the order of" is an approximation of what O() actually is in CS, merely that that is how OP used it.

Geosynchronous satellite latency in best case: 250ms

LEO latency best case: 6ms

I've used those Hughes satellite connections before, I never got anything close to 250ms. More like 400ms.

The absolute minimum latency you'll ever see with geostationary is about 489ms. That's assuming 1:1 dedicated transponder capacity and something like a higher end SCPC terminal and modem, accounting for latency and modulation/coding/FEC by the satellite modems on both ends.

Consumer grade hughesnet stuff will vary anywhere from 495ms in the middle of the night to 1100ms+ during peak periods due to oversubscription.

This should probably tell you something about spacex's claims as well. The actual latency is never just the slant range. There's a ton of processing and network switching too.

I am pretty optimistic about SpaceX's claims for what the space-segment latency will be. If you look at the system architecture for current generation high-bandwidth Ka-band geostationary services, which has dozens of spot beams on North America, there's about 30 teleports spread out around the US and Canada. These allow Viasat and Hughesnet customers, and similar, to consume capacity in the same spot beam as the teleport they're uplinked from (vs the satellite cross-linking a set of kHz from one Ka-band spot beam to another). For example, customers in really rural areas of Wyoming are going to connect to a teleport that's in Cheyenne, which will usually be in the same spot beam. Sites in Cheyenne near the railway have really good terrestrial fiber capacity for an earth station operator to buy N x 10 or 100GbE L2 transport links to the nearest major city.

It would be technically possible, but uneconomical and an inefficient use of space segment transponder kHz to have customers in Wyoming moving traffic through a teleport in the Chicago area. Here's an illustration of Ka-band spot beams on a typical state of the art geostationary satellite:


Applying the same concept to starlink, telesat's proposed system, and oneweb, if they build a number of teleports geographically distributed near rural areas, it will allow individual satellites to serve as bent-pipe architecture from CPE --> Teleport, within the same moving LEO spot beams, or to have customer traffic take only one hop through space to an adjacent satellite before it hits the trunk link to an earth station. For example customers in a really rural area of north Idaho along US95 might "see" a set of moving satellites that also have visibility to an earth station in Lewiston, ID, where carrier grade terrestrial fiber links are available. Or a customer in a remote mountainous area of eastern Oregon may uplink/downlink through a teleport in Bend.

The ultimate capacity of the system will be determined by how few hops through space they can get the traffic to do. Since every satellite will be identical and capable of forming a trunk link to a starlink-operated earth station, when it's overhead of it, they have an incentive to build a large number of earth stations geographically distributed around the world.

It's basically the same idea as o3b's architecture but at a much smaller scale.

I don't doubt the latency in space numbers. What I don't believe is using a theoretical distance to compute latency. As you said, a LOT of that latency can come from scheduling inefficiencies and congestion. Each of their satellites has a relatively small amount of bandwidth, so if you happen to be in a beam with a lot of people, you'll be hit hard by this. As far as I know, their satellites are not capable of steering beams, and rely purely on the placement directly down from where they are.

Another consideration: adding another 50ms to GEO latency isn't really going to change anyone's opinion. It's still targeted towards streaming, and latency doesn't matter as much since they're not targeting real-time gamers. SpaceX needs the latency to be very low to hit that market. There's a world of difference going from a 30ms ping to an 80ms ping, and once you're past a certain point, it puts you in the same camp as GEO.

> LEO latency best case: 6ms

Light travels about 1800km in 6ms, but that's just one way. Straight up and straight down at 550km is 3.6ms.
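For comparison, the same straight-up-and-down calculation across the orbit classes mentioned in this thread (one-way, vacuum speed of light, ignoring slant range and all processing delays):

```python
C_KM_S = 299792.458  # speed of light in vacuum, km/s

def bounce_ms(altitude_km):
    """One-way latency in ms: straight up to the satellite and straight
    back down. A lower bound; slant range and processing add to this."""
    return 2 * altitude_km / C_KM_S * 1000

print(f"LEO 550 km:   {bounce_ms(550):.1f} ms")   # ~3.7 ms
print(f"LEO 1150 km:  {bounce_ms(1150):.1f} ms")  # ~7.7 ms
print(f"GEO 35786 km: {bounce_ms(35786):.0f} ms") # ~239 ms
```

These are physics floors, not expected service latency: as noted upthread, modems, coding, and network switching sit on top of every hop.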

Light is slower in the atmosphere; did you adjust for that?

Speed of light in air is still ~0.9997c (approaching full vacuum speed as altitude increases), so it's a much less noticeable difference than it is for copper or glass.

Cheaper how? I would assume with a lower orbit they experience more drag and deorbit faster meaning they need to be replaced more often.

Maybe using the ones in lower orbit to cover more densely populated locations?

The original orbits would take ~100 years to decay, which is way longer than the life of the satellite. The new orbit is around 5 years (not taking into account maneuvers from the fuel on board). Makes a lot of sense, especially since it looks like they'll be iterating the design as they go (two that jumped out at me: initially not all satellites will be taking advantage of both Ku and Ka bands, and not all satellites will have phased array antennas)

That's really interesting, thank you for sharing.
