• merc@sh.itjust.works

    To me, the physics of the situation makes this all the more impressive.

    Voyager has a 23 watt radio. That’s about 10x as much power as a cell phone’s radio, but it’s still small. Voyager is so far away that it takes 22.5 hours for the signal to reach Earth traveling at light speed. This is a radio beam, not a laser, but it’s an extraordinarily tight beam for a radio, only 0.5 degrees wide; even so, by the time it arrives it’s thousands of times wider than the Earth. It’s being received by some of the biggest antennas ever made, but they’re still only 70m wide, so each one receives only a tiny fraction of the power transmitted. So, they’re decoding a signal that’s on the order of 10^-18 watts.
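
    Running those numbers as a crude back-of-envelope (uniform beam, rounded constants, so good to an order of magnitude at best) does land right around that figure:

```python
import math

# Figures quoted above (approximate)
TX_POWER_W = 23.0            # Voyager's transmitter power
BEAM_WIDTH_DEG = 0.5         # full width of the high-gain antenna's beam
LIGHT_HOURS = 22.5           # one-way signal travel time
DISH_DIAMETER_M = 70.0       # DSN dish diameter
EARTH_DIAMETER_M = 1.2742e7  # Earth's diameter in meters

C = 2.998e8  # speed of light, m/s

distance_m = LIGHT_HOURS * 3600 * C
spot_radius_m = distance_m * math.tan(math.radians(BEAM_WIDTH_DEG / 2))

# Assume (crudely) that the 23 W is spread evenly over the beam's spot;
# a 70 m dish intercepts only its own area's worth of that disc.
dish_area = math.pi * (DISH_DIAMETER_M / 2) ** 2
spot_area = math.pi * spot_radius_m ** 2
received_w = TX_POWER_W * dish_area / spot_area

print(f"spot is {2 * spot_radius_m / EARTH_DIAMETER_M:,.0f}x Earth's diameter")
print(f"received power: {received_w:.1e} W")  # ~2.5e-18 W, i.e. attowatts
```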

    So, not only are you debugging a system created half a century ago without being able to see or touch it, you’re doing it with a 2-day delay between every change and its result, using some of the most absurdly powerful radios on Earth just to send signals.

    The computer side of things is even more impressive than this makes it sound. A memory chip failed. On Earth, you’d probably figure that out by physically looking at the hardware and probing it with a multimeter or an oscilloscope. They couldn’t do that. They had to debug it by watching the program as it ran, tried to use the faulty memory chip, and failed in interesting ways. They could interact with it, but only on a 2-day delay. They also knew that one wrong move could cost them what little control they still had, leaving the probe fully dead.
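
    Purely to illustrate the flavor of that kind of remote debugging, and not NASA’s actual tooling: one way to localize a bad region is to diff a downlinked memory readback against what should be stored.

```python
# Purely illustrative of the idea, not NASA's tooling: localize a bad region
# by diffing a downlinked memory readback against what should be stored.
def corrupted_spans(expected: bytes, dumped: bytes, gap: int = 4):
    """Group mismatched byte offsets into nearby spans."""
    bad = [i for i, (e, d) in enumerate(zip(expected, dumped)) if e != d]
    spans, start = [], None
    for i, off in enumerate(bad):
        if start is None:
            start = off
        if i + 1 == len(bad) or bad[i + 1] - off > gap:
            spans.append((start, off))
            start = None
    return spans

expected = bytes(range(64))
dumped = bytearray(expected)
dumped[24:32] = b"\x00" * 8  # simulate an 8-byte region gone bad
print(corrupted_spans(expected, bytes(dumped)))  # -> [(24, 31)]
```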

    So, a malfunctioning computer that you can only interact with at 40 bits per second, that takes 2 full days between every send and receive, that has flaky hardware and was designed more than 50 years ago.

    • flerp@lemm.ee

      And you explained all of that WITHOUT THE OBNOXIOUS GODDAMNS and FUCKIN SCIENCE AMIRITEs

      • KubeRoot@discuss.tchncs.de

        Oh screw that, that’s an emotional post from somebody sharing their reaction, and I’m fucking STOKED to hear about it, can’t believe I missed the news!

    • chimasterflex@lemmy.world

      Finally, I can add my take to this. I’ve worked in memory testing for years, and I’ll tell you that it’s actually pretty expected for a memory cell to fail after some time. So much so that we typically build redundancy into the memory: we add more cells than we activate at any given time. When shit goes awry, we can reprogram the memory controller to remap the cells so that the bad ones are mapped out and unused spares are mapped in. We typically don’t probe memory cells unless we’re doing some type of in-depth failure analysis. Usually we just run a series of algorithms that test each cell, identify which ones aren’t responding correctly, and map those out.
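
      A toy sketch of that remap-and-retest idea, with invented names and layout (real controllers do this in hardware with fuses and registers):

```python
# Toy model of spare-cell remapping; the layout and names are invented
# for illustration (real controllers do this with fuses/registers).
class RemappingMemory:
    def __init__(self, active_rows, spare_rows, stuck_rows=()):
        self.active = active_rows
        self.spares = list(range(active_rows, active_rows + spare_rows))
        self.remap = {}               # logical row -> spare physical row
        self.stuck = set(stuck_rows)  # injected fault: rows that always read 0
        self.data = {}

    def _phys(self, row):
        return self.remap.get(row, row)

    def write(self, row, value):
        self.data[self._phys(row)] = value

    def read(self, row):
        phys = self._phys(row)
        return 0 if phys in self.stuck else self.data.get(phys, 0)

    def map_out(self, bad_row):
        """Point a failing logical row at an unused spare."""
        self.remap[bad_row] = self.spares.pop(0)

def find_bad_rows(mem):
    """Greatly simplified march-style test: write a pattern, read it back."""
    bad = []
    for row in range(mem.active):
        mem.write(row, 0xA5)
        if mem.read(row) != 0xA5:
            bad.append(row)
    return bad

mem = RemappingMemory(active_rows=8, spare_rows=2, stuck_rows={3})
for row in find_bad_rows(mem):  # finds row 3
    mem.map_out(row)
print(find_bad_rows(mem))       # -> []  (bad cell mapped out, spare mapped in)
```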

      None of this is to diminish the engineering challenges they faced; it’s just to help give an appreciation for the technical mechanisms we’ve improved over the last few decades.

      • trolololol@lemmy.world

        pretty expected for a memory cell to fail after some time

        50 years is plenty of time for the first memory chip to fail. Most systems would face total failure from multiple defects in half that time, even WITH physical maintenance.

        Also remember it was built with tools from the 70s. Which is probably an advantage, given everything else is still going

        • orangeboats@lemmy.world

          Also remember it was built with tools from the 70s. Which is probably an advantage

          Definitely an advantage. Even without planned obsolescence, older electronics are pretty tolerant of outside interference compared to modern ones: the huge feature sizes of that era mean it takes far more stray charge (from a cosmic ray, say) to flip a bit.

  • FlatFootFox@lemmy.world

    I still cannot believe NASA managed to re-establish a connection with Voyager 1.

    That scene from The Martian where JPL had a hardware copy of Pathfinder on Earth? That’s not apocryphal. NASA keeps a lot of engineering models around for a variety of purposes including this sort of hardware troubleshooting.

    It’s a practice they started after Voyager. They shot that patch off into space based on old documentation, blueprints, and internal memos.

    • nxdefiant@startrek.website

      Imagine scrolling back in the Slack chat 50 years to find that one thing someone said about how the chip bypass worked.

        • jaybone@lemmy.world

          This is why Slack is bullshit. And Discord. We should all go back to email. It can be stored and archived and organized, and get off my lawn.

          • deweydecibel@lemmy.world

            I mean, unironically, yeah.

            It’s not even that we need to go back to email. The problem isn’t moving on from outdated forms of communication, it’s that the technology being pushed as a replacement for it is throwing out the baby with the bathwater.

            Which is to say nothing of the fact that all of these new platforms are proprietary, walled off, and in some cases don’t make controlling the data easy if you’re not hosting it (and their searches are trash).

            • sudo42@lemmy.world

              all of these new platforms are proprietary, walled off, and in some cases don’t make controlling the data easy if you’re not hosting it

              You’ve just discovered their business case. So many new businesses these days only insinuate themselves into an existing process in order to co-opt it and charge rents.

          • Artyom@lemm.ee

            It’s not Slack’s fault. It is a good platform for one-off messages. Need a useless bureaucratic form signed? Slack. Need your boss to okay the afternoon off? Slack. Need to ask your lead programmer which data structure you should use and why it’s set up that way? Sounds like the answer should be put on a wiki page, not in Slack.

            All workflows are small components of a larger workplace. Email also sucks for a lot of things, and it probably wouldn’t have worked in this case; memos are the logical upgrade from email when you want to make sure everyone receives it and the topic is not up for further discussion.

            • ferret@sh.itjust.works
              1. Don’t use google as your email provider
              2. Keep backups of your email (you can do this on Gmail, too; see the sketch below)
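
              For point 2, a minimal sketch of pulling mail down over IMAP for offline backup; the host, account, and password here are placeholders (Gmail also offers Takeout for one-off exports):

```python
import imaplib
import pathlib

# Host, account, and password are placeholders, not a real configuration.
HOST, USER, PASSWORD = "imap.example.org", "me@example.org", "app-password"

backup = pathlib.Path("mail-backup")
backup.mkdir(exist_ok=True)

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX", readonly=True)
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, msg = imap.fetch(num, "(RFC822)")
        # msg[0] is (envelope-info, raw RFC822 bytes); save one .eml per message
        (backup / f"{num.decode()}.eml").write_bytes(msg[0][1])
```
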
        • nxdefiant@startrek.website

          IBM is 100, but the Internet didn’t exist in 1924, so we’ll say the clock starts in 1989. I’m pretty sure at least MS or IBM will be around in 15 years.

    • ricecake@sh.itjust.works

      To add to the metal: the blueprints include the design of the processor itself.

      https://hackaday.com/2024/05/06/the-computers-of-voyager/

      They don’t use a microprocessor like anything today would, but a pile of chips that provide things like logic gates and counters: a grown-up version of https://gigatron.io/
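
      To give a flavor of that kind of building block, here’s a toy counter assembled from toggle flip-flops in Python. It reflects nothing about Voyager’s actual schematics; it just shows what “a chip that provides counters” does:

```python
# A toy counter built from toggle flip-flops, the sort of building block a
# single counter chip provides. Nothing here reflects Voyager's actual
# schematics; it only illustrates "logic gates and counters".
class TFlipFlop:
    """Toggle flip-flop: output flips on each clock pulse."""
    def __init__(self):
        self.q = 0

    def clock(self):
        self.q ^= 1
        return self.q

def ripple_counter(bits, pulses):
    """Chain T flip-flops; each stage toggles when the one before it falls."""
    stages = [TFlipFlop() for _ in range(bits)]
    for _ in range(pulses):
        for stage in stages:
            before = stage.q
            after = stage.clock()
            if not (before == 1 and after == 0):  # no falling edge: ripple stops
                break
    return sum(ff.q << i for i, ff in enumerate(stages))

print(ripple_counter(4, 11))  # -> 11 (4 bits counting 11 clock pulses)
```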

      That means “written in assembly” really means “written in a bespoke assembly dialect that maybe wasn’t documented very well, for bespoke hardware that maybe wasn’t documented very well either”.

    • BearOfaTime@lemm.ee

      I realize the Voyager project may not be super well funded today (how is it funded, just general NASA funds now?). I’m just wondering what they have hardware-wise (or ever had). Surely the Voyager system had precursor versions?

      Or do they have a simulator of it today? We’re talking about early-70s hardware, which should be fairly straightforward to replicate in software. Perhaps some independent geeks have done this for fun? (I’ve read of old hardware such as the 8088 being replicated in software because some geeks just like doing things like that.)
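
      That sort of replica is indeed straightforward at its core. Every such emulator shares the same skeleton: fetch, decode, execute. Here it is as a toy accumulator machine (a made-up instruction set, not the 8088 or anything flown on Voyager):

```python
# A toy accumulator machine, not any real instruction set: the fetch,
# decode, execute skeleton that software replicas of old CPUs share.
def run(program, memory):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]   # fetch
        pc += 1
        if op == "LOAD":        # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JNZ":       # jump if accumulator is non-zero
            if acc != 0:
                pc = arg
        elif op == "HALT":
            break
    return memory

mem = [5, 7, 0]
run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)], mem)
print(mem)  # -> [5, 7, 12]
```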

      I have no idea how NASA functions with old projects like this, and I’m surely not saying I have better ideas - they’ve probably thought of a million more ways to validate what they’re doing.

  • Rob@lemmy.world

    Interviewer: Tell me an interesting debugging story

    Interviewee: …

    • sudo42@lemmy.world

      Heh. Years ago during an interview I was explaining how important it is to verify a system before putting it into orbit: if you find problems in orbit, you usually can’t fix them. My interviewer said, “Why not just send up the space shuttle to fix it?”

      Well…

  • FreeFacts@sopuli.xyz

    I wonder how it is secured, or could anyone with a big enough transmitter reprogram it at will…

    • AstralPath@lemmy.ca

      I think the security is adequately managed by the need for a massive transmitter as well as the question “what is there to gain via a hostile takeover and re-programming the probe?”

      I bet there’s actual security of some kind going on, but those two points seem like a massive hurdle to clear just to mess with a deep space probe.
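
      For what command authentication can look like in principle: the sketch below is purely illustrative. Voyager predates this kind of cryptography, and nothing here describes its actual uplink; the key and commands are made up.

```python
import hashlib
import hmac

# Purely illustrative: how a modern uplink could authenticate commands.
# Voyager predates this kind of cryptography; nothing here describes its
# actual command system. The key and commands are made-up placeholders.
SECRET = b"ground-station-shared-key"

def sign(command: bytes) -> bytes:
    """Append a SHA-256 HMAC tag so the receiver can check authenticity."""
    return command + hmac.new(SECRET, command, hashlib.sha256).digest()

def verify(frame: bytes):
    """Return the command if its tag checks out, else None."""
    command, tag = frame[:-32], frame[-32:]
    expected = hmac.new(SECRET, command, hashlib.sha256).digest()
    return command if hmac.compare_digest(tag, expected) else None

print(verify(sign(b"remap memory bank 3")))  # -> b'remap memory bank 3'
print(verify(b"malicious" + b"\x00" * 32))   # -> None (bad tag rejected)
```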