
Stray Beam Weapons fire

I doubt that would work that way given it's pure Particle Energy of some form and not an Electronic Device.

Torpedoes & Missiles of that sort would definitely have those kinds of things programmed in.

But like a Bullet, pure Particle Beams aren't Electronic Devices that you can program.

So once you send it, especially in the vacuum of space, it just goes until it hits something or spreads out & dissipates enough to where it wouldn't be harmful.

That region could be quite large, but given how vast space is, it shouldn't be a problem if you use common sense and engage your FTL drive to avoid any Combat Zones.

If you need to travel into the Combat Zone for any reason, that could be an issue that the writers deal with, since that could be a "Plot Point" from a civilian PoV (Point of View) in their civilian vessel(s).

Since phasers and torpedoes have 'ranges' - we know the maximum effective range of phasers is 1 light second (aka, 300,000 km).

Torpedoes have a range of 3 million km.

To me this heavily implies that phasers past 300,000 km simply 'dissipate' harmlessly... and torpedoes just run out of fuel or self-destruct (I think the latter was implied - for safety reasons).

That's pretty much it.
I would imagine it's the same thing for every other species - past effective ranges, the weapons are eminently useless?
 
Since phasers and torpedoes have 'ranges' - we know the maximum effective range of phasers is 1 light second (aka, 300,000 km).
I believe that's a (SuperLuminal / FTL) Active Targeting Sensor limitation: you send out your sensor pulse, get the return info back, predict where the target will be, & fire the weapon with a reasonably high expected hit percentage.
It's not necessarily a physical limit of the DEW Particle emissions. Remember, in the vacuum of space, once you send something, it goes on "Forever".
It's not like a 'Video Game' where the data gets deleted to save memory when it's out of range.
Once an object is in motion, it just keeps on going until it impacts something & reacts.
The Damage from a DEW will drop in intensity as the beam diverges over time & distance, but how far it has to travel before it's effectively "Non-lethal" depends on the original power level you're firing at.
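
A rough back-of-the-envelope sketch of that falloff (Python; the 1 GW power, 1 m aperture, & 1 microradian divergence are made-up illustration numbers, not canon figures):

```python
import math

def beam_intensity(power_w, aperture_m, divergence_rad, range_m):
    """Average intensity (W/m^2) of a linearly diverging beam at a given range.

    Models the beam as a cone of half-angle `divergence_rad` on top of its
    initial aperture radius -- a first-order sketch, not anything on screen.
    """
    spot_radius = aperture_m / 2 + divergence_rad * range_m
    return power_w / (math.pi * spot_radius ** 2)

# Illustration-only numbers: 1 GW beam, 1 m aperture, 1 microradian divergence.
for r_km in (1, 300_000, 3_000_000, 30_000_000):
    i = beam_intensity(1e9, 1.0, 1e-6, r_km * 1e3)
    print(f"{r_km:>12,} km : {i:10.3e} W/m^2")
```

Once divergence dominates, the intensity falls off with the square of distance; it never hits zero, it just eventually stops mattering.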

Torpedoes have a range of 3 million km.
I think that's more of a "Fuel Limit" at STL, given that it uses the Deuterium it has on-board for its tiny impulse thrusters.
The Torpedoes that StarFleet uses are DRAMATICALLY INFERIOR in comparison to what the Druoda created with the Series 5 Long-Range Tactical Armor.

StarFleet's "Type-6 Photon Torpedo" that is the mass produced Bread & Butter basic Mass Produced Photon Torpedo is what StarFleet uses.
The Type-6 was used by the Intrepid-class & ProtoStar-class StarShips as their basic Photon Torpedo.

Obviously there were other Torpedoes like the Class 9 & Class 10 Photon Torpedoes that were used by Voyager, but they weren't the vast majority of the Torpedo stockpile; they were specialized torpedoes.
Same with Tri-Cobalt, Quantum, & TransPhasic Torpedoes. Those are rarer in terms of inventory availability than the bog-standard Photon Torpedo.

The Druoda's Series 5 Long-Range Tactical Armor has:
- Its explosive yield consisted of a highly focused antimatter explosion with a blast radius of 200 kilometers.
- For a 200 km Blast Radius, a Blast Wave Effects Calculator puts the WarHead in the BallPark of 100 Gt when detonating on impact with the ground.
Groundburst:
- Peak overpressure 20 psi at 96.0 km: heavily built concrete buildings are severely damaged or demolished.
- Peak overpressure 10 psi at 135.3 km: reinforced concrete buildings are severely damaged or demolished. Most people are killed.
- Peak overpressure 5 psi at 200.8 km: most buildings collapse. Injuries are universal, fatalities are widespread.
- Peak overpressure 3 psi at 279.4 km: residential structures collapse. Serious injuries are common, fatalities may occur.
- Peak overpressure 1 psi at 619.9 km: window glass shatters. Light injuries from fragments occur.
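
As a sanity check on that ~100 Gt ballpark, standard cube-root blast scaling gets you there, assuming the commonly quoted ~4.3 km 5-psi radius for a 1 Mt surface burst (an IRL reference figure, not anything canon):

```python
def scaled_radius_km(yield_mt, ref_radius_km=4.3, ref_yield_mt=1.0):
    """Cube-root (yield) scaling of a blast-overpressure radius.

    ref_radius_km ~4.3 km is a commonly quoted 5 psi radius for a
    1 Mt surface burst -- an IRL ballpark, not a canon number.
    """
    return ref_radius_km * (yield_mt / ref_yield_mt) ** (1 / 3)

print(f"{scaled_radius_km(100_000):.0f} km at 5 psi")  # 100 Gt = 100,000 Mt -> ~200 km
```

100 Gt is 100,000 Mt, the cube root of 100,000 is ~46, and 4.3 km scaled by 46 is roughly 200 km at 5 psi, which lines up with the calculator output above.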

StarFleet's 1.5 kg of M/A-M is only 64.44 MegaTons of destructive force; while it has a 300 km listed blast radius, that's most likely measured in space rather than on Planetary Impact.
StarFleet's main objectives are primarily targets in space. StarFleet's Torpedoes aren't primarily designed to target Surface Emplacements Planet-side.
It can do that, but it's not the main objective. StarFleet Torpedoes are designed to be "Cheap Resource-wise" to mass produce & to do Good-Enough Damage against Space-Based targets.
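
For reference, the 64.44 MegaTon figure falls straight out of E = mc², assuming the 1.5 kg of antimatter fully annihilates with a matching 1.5 kg of matter (a quick Python check, nothing more):

```python
C = 299_792_458.0      # speed of light, m/s
MT_TNT_J = 4.184e15    # joules per megaton of TNT

antimatter_kg = 1.5                  # warhead antimatter load
total_mass_kg = 2 * antimatter_kg    # the matter it annihilates converts too

energy_j = total_mass_kg * C ** 2
print(f"{energy_j:.3e} J = {energy_j / MT_TNT_J:.2f} Mt TNT")  # ~2.70e17 J = ~64.4 Mt
```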

Druoda's Projectiles seem to be designed for Planetary Bombardment (basically wiping out enemy cities), especially given that we found it embedded in the surface of a Planet near a Nuclear-like Blast Crater.
So StarFleet's 64 MegaTons is WAY INFERIOR to the 100 GigaTon range of Explosive Power that the Druoda were packing in their WarHead.

- The armor unit was powered by a condensed energy matrix, which alone could power a fleet of starships.
That's basically the Ultimate in Energy-Dense Batteries, given everything this Torpedo can do.

- Instead of a standard computer core, there was an on-board class-11 artificial intelligence which used bio-neural circuitry to mimic humanoid synaptic functions.
The intelligence was programmed to take every measure necessary to ensure that it reached its target.
When separated from its explosive components and reprogrammed, the intelligence core could be used for planetary weather control or even terraforming purposes.

Given how much Computing Power the WarHead has, it's incredible. That's Modern SuperComputer Cluster level of capability in the tiny form factor of an Advanced Torpedo Computer.

- The weapon was warp-capable, had a maximum range of 80 light years, and was protected by paratrinic shielding.
StarFleet Torpedoes only have "Warp Coasting Capability", & a limited STL range of 3M km.
StarFleet Torpedoes also lack any significant form of "Shields" to defend themselves.

- Its sensors were able to detect both its position and when its systems were being tampered with.
The torpedo was programmed with a targeting threshold of two light years meaning that, when it was within two light years of its target, it could not disarm or divert – not even with the correct command codes.
However, the detonation sequence could be stopped by routing an electromagnetic pulse through its power matrix.

I doubt StarFleet Torpedoes are even remotely that good at "Anti-Tampering".

- Also StarFleet Torpedoes have dimensions of:
Body/Casing Dimensions: LxWxH = (210 × 76 × 45) cm
Weight: 247.5 kilos when not loaded (545.644 pounds)

The Druoda's Torpedo was easily carried by 2x Lower Deckers, one end under each officer's arm.
Its weight clearly wasn't anywhere near the StarFleet Torpedo's, given how easily it was lifted by 2x StarFleet Lower Deck Officers.

By every measurable metric, the Druoda have a "Superior Torpedo" in every sense of the word.
Something Voyager obviously scanned and the info will be brought back to the Weapon Nerds within StarFleet & used to improve their torpedoes sometime in the future.


To me this heavily implies that phasers past 300,000 km simply 'dissipate' harmlessly... and torpedoes just run out of fuel or self-destruct (I think the latter was implied - for safety reasons).
Like Lasers, a focused DEW beam would also eventually diverge as it travels further.
But how "Harmless" they would be would depend on Energy Density/Intensity over the area that the Beam is spreading to.
But if your base Energy Density/Intensity is REALLY large to begin with, that could be problematic since the danger zone would be quite wide if you missed.

The Torpedoes can easily be programmed to "Self Destruct" if they are about to run out of fuel or hit a pre-programmed threshold.
So I'm less worried about those.


That's pretty much it.
I would imagine it's the same thing for every other species - past effective ranges, the weapons are eminently useless?
It's not really "Useless"; remember, it depends on what you're targeting & how maneuverable it is.

What kind of "Effective Range" you get depends on what you're targeting and that'll vary by different enemy types:
- (≤ _1 Light-Hour) Indiscriminate Bombardment against a Massive Celestial Object on a predictable path
- (≤ _1 Light-Minute) Planetary Bombardment (Assuming you're picking a fixed emplacement on the surface), this obviously depends on how fast the planet is moving & rotating
- (≤ 10 Light-Seconds) against A Orbiting StarBase w/ 'Fixed path traversal' & 'Limited Altitude Adjustability' makes them horrible at dodging
- (≤ _5 Light-Seconds) against A "StarShip Class" vessel that is significantly large & Heavy Vessel, there is only so much manuverability you can expect from them.
Obviously Larger Vessels gives you more lattitude, while Smaller Vessels will give you less time available to reasonably hit the target.
- (≤ ½ Light-Second) against Shuttles & Smaller vessels.
- (≤ ¼ Light-Second) against manueverable Space-Fighter type Space-Craft
- (≤ ⅛ Light-Second) against ridiculously manueverable Gundam Style Bits / Small Attack Drones.

Each one of those requires an ever closer / smaller attack radius, with less travel time, to be effective due to how increasingly maneuverable they are; a lot of that comes down to the Energy Projectile's travel time, since it's capped at the Speed of Light if it's a direct Laser-like Beam, as stated in the ST:TNG Technical Manual.

Obviously the range would be shorter if you have energy bolts that travel at a smaller fraction of the Speed of Light.

Don't forget that the Vessel's AI/Computer can have a pre-programmed Auto-Dodge to help with defense by jinking against incoming Beam Projectiles.

Or the Pilot could be paying attention and dodging incoming fire actively.

Either way, the targeting envelope, in terms of the travel time you can afford, gets smaller against an ever smaller & more maneuverable target.
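
Here's a minimal kinematic sketch of why that envelope shrinks (Python; the lateral accelerations are illustration-only guesses, not canon performance figures). It just computes how far a target can sidestep, from a standing start, during the beam's flight time:

```python
def dodge_displacement_m(range_light_seconds, lateral_accel_ms2):
    """How far a target can displace sideways while a light-speed beam is
    in flight to it.  Pure kinematics: d = 1/2 * a * t^2, with the flight
    time in seconds equal to the range in light-seconds by definition."""
    t = range_light_seconds
    return 0.5 * lateral_accel_ms2 * t ** 2

# Illustration-only accelerations (not canon): ~1 g starship, ~10 g shuttle, ~1000 g drone.
for name, rng_ls, accel in [("starship @ 5 ls",     5.0,    10.0),
                            ("shuttle  @ 0.5 ls",   0.5,    100.0),
                            ("drone    @ 0.125 ls", 0.125,  10_000.0)]:
    print(f"{name}: ~{dodge_displacement_m(rng_ls, accel):,.0f} m of sidestep before the beam arrives")
```

A big, slow ship barely moves relative to its own hull length in that time, while a nimble drone can displace many times its own size, which is why the usable range collapses as targets get smaller & more agile.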
 
That 300,000 kilometer limit is for a single type X emitter... for a total of 196 type X emitters, 4,200,000 kilometers is more likely. Which places it in the running for the Honor Harrington universe...


Don't underestimate what would be going on in Star Trek...

The real trick would be at that range targeting something small, like a Galaxy class starship.

A type 7 Shuttlecraft? Forget about it.

The problem is sensor jitter. Jitter is caused by vibrations on the firing ship. A child jumping down from an upper bunkbed will have a result.
 
That 300,000 kilometer limit is for a single type X emitter... for a total of 196 type X emitters, 4,200,000 kilometers is more likely. Which places it in the running for the Honor Harrington universe...
That's not how that works IMO.
It doesn't matter if it's a single Phaser Emitter within the Phaser Array or if it's multiple.
The range doesn't change; because you're in space, as long as you're firing into space, the particles have no fundamental range limit.

Sensors are a different thing, they function independently of the Phaser Array.

How dense your Particle Stream that you fired is & how much it has diverged over time & distance, that's a different story.

You can increase the initial Particle Stream Energy Density by combining multiple Emitters within the Phaser Array, that's a common feature.
But that has no effect on range.


Don't underestimate what would be going on in Star Trek...
I don't, but I still try to make it mesh with IRL physics as much as possible & basic principles of how those weapons should fire.

The real trick would be at that range targeting something small, like a Galaxy class starship.
Exactly, & with limited sensor accuracy at range, being able to hit something that small in the distant horizon is impressive enough.

A type 7 Shuttlecraft? Forget about it.
Not at 1 Light-Second, that's way too small.
Any good internal ship's AI Auto-Dodge could easily jink the craft fast enough to avoid getting hit.

The problem is sensor jitter. Jitter is caused by vibrations on the firing ship. A child jumping down from an upper bunkbed will have a result.
Your ship must be incredibly lightweight if that is going to affect your sensors.

Children jumping on their beds shouldn't have anywhere near that effect on your sensors.

Especially since good Sensors would have their own shock & vibration dampeners on their end to help with sensor accuracy.

Compare the mass & forces of a child jumping on a bed to the mass of the Enterprise-D.

It's not even comparable.

Anything that child does is negligible to the operations of the ship.
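
To put rough numbers on the jitter question, the small-angle math is simple enough to sketch (Python; the microradian values are arbitrary illustration numbers, not anyone's spec):

```python
def miss_distance_m(range_km, jitter_microrad):
    """Small-angle approximation: lateral offset at the target produced by an
    angular pointing error at the emitter (offset = range * angle)."""
    return range_km * 1e3 * jitter_microrad * 1e-6

for jitter in (10.0, 1.0, 0.1):
    print(f"{jitter:>4} µrad at 300,000 km -> {miss_distance_m(300_000, jitter):>7,.0f} m off aim")
```

At 1 light-second, even a 1 microradian pointing error walks the beam ~300 m off aim: maybe half a Galaxy-class hull length, but dozens of shuttle lengths, so how much stabilization matters depends heavily on target size.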
 
I always figured it was like the video games: the beam shoots out like a noodle to a specific distance, then just fades out.

If we're going to start applying realism to Star Trek space battles, not only is there the huge issue above of antimatter warheads causing only minor hull damage, but there's also the way ships move in space, which is about as realistic as there being sound and magical light sources.
 
The problem with beam jitter at range is that small or very small disturbances get multiplied in their effects.
Stabilized weapon/sensor mounts can handle a great deal of gross movement, but very fine disturbances? I question that.

In the early 1990s 'American Scientist' magazine had a very interesting article. Very interesting.

It was talking about traditional radar tracking systems.

At ten revolutions per minute you are updating the track once every six seconds. Problem: how do you know that you are tracking the same target? Answer: you don't, unless the following conditions are met: the physical size of the target is such that it doesn't matter to the update; and second that the rate of speed of the target is such that it doesn't move far in those six seconds.

This is why optical mice take 1,200 images per second. It reduces the error probability to something more reasonable.

Another problem is that for every target that you are tracking, the amount of computational power required goes up with the square of the number of tracks involved. Ten targets? One hundred times the amount of computer power required.

One reason why Tricorders may be a joke... Even using the F-15 Eagle model.

According to Tom Clancy's 'A Guided Tour of a Fighter Squadron', one computer scans for other targets while the other tracks only one target... this applies only to F-15As and Bs. F-15Cs and Ds had more powerful computers.

Assuming that Roddenberry and company got some hints about the way military computer technology was going in the early to mid 1960s, a Duotronic system might merely be a set of twin Control Data Corporation CDC 6600 or CDC 7600 computers, redesigned to optimize their particular requirements using a one-hundred-stage pipeline architecture. In other words, a GPU optimized for Artificial Intelligence, allowing for the low clock speeds of 1960s computers...
 
Brisance from a strong beam turning rocks into shrapnel would be a mess.
But couldn't that be an intentional effect? Say you wanted to create many more mini asteroids in an "UnUsually Dense Asteroid Field" because you were trying to chase down an escaping small vessel (shuttle-sized), but the main StarShips couldn't practically enter the UnReasonably dense Asteroid field that we always seem to see in Sci-Fi.

So at super long range, you intentionally fire your beams to shatter those Asteroids and create many more mini asteroids to make maneuvering through them unreasonably difficult?

The problem with beam jitter at range is that small or very small disturbances get multiplied in their effects.
Stabilized weapon/sensor mounts can handle a great deal of gross movement, but very fine disturbances? I question that.
That's why AESA/PESA Radars were invented.

In the early 1990s 'American Scientist' magazine had a very interesting article. Very interesting.

It was talking about traditional radar tracking systems.

At ten revolutions per minute you are updating the track once every six seconds. Problem: how do you know that you are tracking the same target? Answer: you don't, unless the following conditions are met: the physical size of the target is such that it doesn't matter to the update; and second that the rate of speed of the target is such that it doesn't move far in those six seconds.

This is why optical mice take 1,200 images per second. It reduces the error probability to something more reasonable.
That's also why fixed PESA/AESA Radars have become the dominant form factor.
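
Just to put the six-second update problem in perspective, here's how far a target drifts between looks (Python; the speeds are illustration-only, though ~0.25c for full impulse is the usual TNG Tech Manual ballpark):

```python
C = 299_792_458.0  # speed of light, m/s

def drift_per_update_km(speed_ms, update_period_s=6.0):
    """Distance a target covers between successive track updates
    (a 10 rpm antenna gives one look every 6 seconds, per the article above)."""
    return speed_ms * update_period_s / 1e3

for label, v in [("airliner, ~250 m/s", 250.0),
                 ("orbital debris, ~8 km/s", 8_000.0),
                 ("full impulse, ~0.25c", 0.25 * C)]:
    print(f"{label}: drifts ~{drift_per_update_km(v):,.0f} km between 6 s looks")
```

A slow target barely moves between looks, so associating the new return with the old track is easy; anything moving at impulse speeds has crossed hundreds of thousands of kilometers, which is exactly the ambiguity the article was describing.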

Another problem is that for every target that you are tracking, the amount of computational power required goes up with the square of the number of tracks involved. Ten targets? One hundred times the amount of computer power required.
That's why modern "Many Core" computers have become a thing along with the exponential increase in computing power.
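
The square-law claim is easy to see if you just count the candidate pairings a naive tracker has to evaluate each update (a toy sketch, not any real tracker's algorithm):

```python
from itertools import product

def association_candidates(num_tracks, num_detections):
    """A naive nearest-neighbour data-association pass considers every
    (existing track, new detection) pair, so cost grows as N^2 when the
    detection count roughly equals the track count."""
    return len(list(product(range(num_tracks), range(num_detections))))

for n in (1, 10, 100):
    print(f"{n:>3} targets -> {association_candidates(n, n):>6} track/detection pairs per update")
```

Ten targets really does mean one hundred pairings per update, which is why wide, many-core hardware helps so much here.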

One reason why Tricorders may be a joke... Even using the F-15 Eagle model.
Ok, you need to explain this reference, I'm not getting it.

According to Tom Clancy's 'A Guided Tour of a Fighter Squadron', one computer scans for other targets while the other tracks only one target... this applies only to F-15As and Bs. F-15Cs and Ds had more powerful computers.

Assuming that Roddenberry and company got some hints about the way military computer technology was going in the early to mid 1960s, a Duotronic system might merely be a set of twin Control Data Corporation CDC 6600 or CDC 7600 computers, redesigned to optimize their particular requirements using a one-hundred-stage pipeline architecture. In other words, a GPU optimized for Artificial Intelligence, allowing for the low clock speeds of 1960s computers...
But modern GPUs & GPU compute are incredibly wide & focused on Vector Number crunching.

That could solve most of those issues with modern-day GPUs.
 
It doesn't make much sense to try and tie 1960s technology to Star Trek since the TOS writers did a fairly decent job of not being specific on how tech works in TOS.

As for a comparison in capabilities, in TOS' "The Gamesters of Triskelion" the Enterprise was able to scan the entire system of Gamma for the transporter party's atoms in about an hour. In TNG's "Relics" it would take seven hours for the E-D to scan the interior surface of the dyson sphere (equivalent to a system) for an opening to escape. You would think TNG's tech should be faster... :whistle:
 
It doesn't make much sense to try and tie 1960s technology to Star Trek since the TOS writers did a fairly decent job of not being specific on how tech works in TOS.

As for a comparison in capabilities, in TOS' "The Gamesters of Triskelion" the Enterprise was able to scan the entire system of Gamma for the transporter party's atoms in about an hour. In TNG's "Relics" it would take seven hours for the E-D to scan the interior surface of the dyson sphere (equivalent to a system) for an opening to escape. You would think TNG's tech should be faster... :whistle:
But isn't there a lot of Urban / City Sprawl on the inner surface of the Dyson Sphere for the Enterprise-D's computer to parse through?

And the Star being in the center of the Dyson Sphere, that limits Enterprise-D's Sensor visibility while they slowly go around the Star to scan for things.
 
But isn't there a lot of Urban / City Sprawl on the inner surface of the Dyson Sphere for the Enterprise-D's computer to parse through?

Sure there would be a lot to parse through looking for an opening but how is that worse than scanning a volume of space the size of a star system for atoms of the transporter party?

And the Star being in the center of the Dyson Sphere, that limits Enterprise-D's Sensor visibility while they slowly go around the Star to scan for things.

Like having the planet Gamma 2 that the Enterprise is orbiting, Gamma's star and other large bodies in the way of the Enterprise's scanners?
 
Sure there would be a lot to parse through looking for an opening but how is that worse than scanning a volume of space the size of a star system for atoms of the transporter party?
I'm assuming the space is pretty empty for the most part & you know what to scan for when it comes to the atoms of the transporter party.

And given that there was a civilization living on the inside of the Dyson Sphere, along with its leftover cities, the scientists inside the Enterprise-D would want to take the time to catalog everything they can while they're there. Extra curiosity to figure out every little detail to bring back to the folks at the UFP.

Like having the planet Gamma 2 that the Enterprise is orbiting, Gamma's star and other large bodies in the way of the Enterprise's scanners?
But they had no physical object in their way like with the Dyson Sphere affecting its flight path and making navigation a bit trickier.

Was the USS Enterprise in working order at that time?

I know the Enterprise-D took some systems damage when getting dragged into the Dyson Sphere.
 
Read Tom Clancy's book.

As to the creators of Star Trek, they looked at a great deal of real world technology, to try to get a handle of understanding on where various technologies were going. So that they wouldn't look too foolish about it by "tomorrow's news".

Everything that they researched was in comparison to what was current then. Very few people really saw too far down the line, and they had their expectations. This handicapped them to a certain extent.

For example, in the novel '2001: A Space Odyssey', the primary author Arthur C. Clarke postulated that there would be a third generational change by the 1980s.

Strangely enough he was right, just not in the way that he was expecting. First-generation computers: vacuum tube; second generation: transistor-based systems; third generation: commodity.

You should about now be going '????'. Maybe even '?????'. What??
In the 1980s there was this little problem with quality control of high-end chips, which is why chip futures exist in the futures market. Yields, it turns out, can't be perfect. Only at about the 80 percent level. This is third generation.
It is quite fascinating actually...

As Robin Williams said, "Reality, what a concept!" ('Mork and Mindy').

We use what tools we have, which limits our understanding. Star Trek TOS is of the 1960s... they missed a great deal.

So do we.

The bridge of the Enterprise in SNW is really something, but what did they miss?
 
Yields, it turns out, can't be perfect. Only at about the 80 percent level. This is third generation.
Silicon Chip Yields are an entire topic in themselves.
Mature Process Nodes can easily be > 90% yielding, with varying levels of perfection depending on what type of Transistors you're laying down and for what purpose.
But that's beyond the scope of this discussion, because there are entire teams of Process Node Engineers dedicated to perfecting Yields.
TSMC is great at improving their Yield Rates over time as the Process Node matures.
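
The first-order textbook way to think about it is a Poisson defect model, yield ≈ exp(-D0 × A); the defect density and die areas below are illustration numbers, not any foundry's actual data:

```python
import math

def poisson_yield(defects_per_cm2, die_area_cm2):
    """First-order Poisson die-yield model: Y = exp(-D0 * A).
    D0 and A here are illustration-only values, not real foundry data."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Same defect density, two die sizes: big dies get hit much harder.
for area_cm2 in (1.0, 6.0):
    print(f"{area_cm2:.0f} cm^2 die @ 0.1 defects/cm^2 -> {poisson_yield(0.1, area_cm2):.0%} yield")
```

Maturing a node is largely about driving that defect density down, which is why yields climb over a process's lifetime.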
 
I'm assuming the space is pretty empty for the most part & you know what to scan for when it comes to the atoms of the transporter party.

The Enterprise was looking for atoms against background radiation in the volume of a star system. Yes, they know what they are scanning for, but it is a lot of data to process in a short amount of time looking for very small targets.

And given that there was a civilization living on the inside of the Dyson Sphere, along with its leftover cities, the scientists inside the Enterprise-D would want to take the time to catalog everything they can while they're there. Extra curiosity to figure out every little detail to bring back to the folks at the UFP.

No, not this time because the Enterprise-D was under the gun to escape the interior of the Dyson Sphere they were trapped in before they were destroyed by a solar flare.

Both the Enterprise and Enterprise-D were operating in haste to scan a large volume of space. In "The Gamesters of Triskelion" the crew was afraid there was a transporter malfunction and were trying to locate the transporter party's atoms as fast as they could. In "Relics" the crew was trying to find an exit in the Dyson Sphere to escape the solar flare before their shields failed.

But they had no physical object in their way like with the Dyson Sphere affecting its flight path and making navigation a bit trickier.

In this case the Enterprise-D is *inside* the Dyson Sphere and is scanning the interior of the surface for an exit large enough that the Enterprise-D can fit through.

Was the USS Enterprise in working order at that time?

Yes.

I know the Enterprise-D took some systems damage when getting dragged into the Dyson Sphere.

The Enterprise-D took damage to her warp and impulse power relays but her sensors were unaffected.
 