• 1 Post
  • 102 Comments
Joined 3 years ago
Cake day: June 9th, 2023

  • it’s not geography aware, it’s network topology aware

    Yes, I’m using “geographic awareness” here as shorthand for the same kind of shortest-route calculation that BGP does. As far as I know, BGP has no knowledge of “countries” or “continents”; it makes decisions purely on local policy and the connectivity info available to it. However, the resulting topology map greatly resembles the corresponding geographic map, a natural consequence of the internet being a physical engineering structure. I’m not sure how publicly available the global BGP data is. If you were designing a backbone-bandwidth-preserving P2P app, you would either give it BGP data directly or, if that’s not available, give it the world map to get most of the same benefit.
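
    A minimal sketch of what that could look like, assuming no BGP feed and falling back to a GeoIP-style world map (all peers, names, and coordinates here are made up for illustration):

    ```python
    # Rank candidate peers by great-circle distance as a crude stand-in for
    # network topology. Assumption: a GeoIP lookup already gave us (lat, lon).
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points, in km."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat/2)**2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon/2)**2
        return 6371 * 2 * asin(sqrt(a))

    me = (51.5, -0.1)  # hypothetical peer in London
    candidates = {"peer-uk": (53.5, -2.2), "peer-us": (40.7, -74.0), "peer-jp": (35.7, 139.7)}

    # Prefer nearby peers first, so transcontinental links carry each chunk
    # as few times as possible.
    ranked = sorted(candidates, key=lambda p: haversine_km(*me, *candidates[p]))
    print(ranked)  # ['peer-uk', 'peer-us', 'peer-jp']
    ```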

    topology that is often obscured by the ISPs for a variety of benign and malevolent reasons

    The multicast proposal would need to be routed through the very same ISP-obscured topology, so there is no advantage over topology-aware P2P.

    I’m not sure this math is mathing

    As a graph problem, it looks to me like getting within a factor of 2 of optimal is practical.

    First consider a hypothetical topology-aware “daisy chain” scheme, where every swarm user has an upload ratio of exactly one. Then every backbone and last-mile connection gets used exactly twice. This is why I say a factor of 2 is the upper limit. It’s like a maze problem where you can navigate an entire maze while traversing each corridor only twice. Then look at the more practical “pyramid” scheme, where half the users have an upload ratio of about 2. Some links get used twice, but many get used only once! The UK-UK1 link is the only one used 3 times. Notably, the US-JP and US-UK transcontinental links only get used once, as you wanted! Overall this pyramid scheme looks to me to be within 20% of the efficiency of the optimal multicast scheme.
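
    A toy accounting model of that claim (not a real swarm simulation; the regions and relay orders are invented) shows how topology awareness changes backbone usage:

    ```python
    # Count how often each inter-region backbone link carries the stream.
    from collections import Counter

    region = lambda user: user[:2]  # "US0" -> "US"

    def backbone_uses(transfers):
        uses = Counter()
        for src, dst in transfers:
            if region(src) != region(dst):
                uses[tuple(sorted((region(src), region(dst))))] += 1
        return uses

    # Topology-aware daisy chain: enter each region once, then relay locally.
    chain = [("US0", "US1"), ("US1", "UK0"), ("UK0", "UK1"), ("UK1", "JP0"), ("JP0", "JP1")]
    print(backbone_uses(chain))  # each backbone link carries the stream once

    # Topology-blind ordering: the same stream recrosses the oceans repeatedly.
    blind = [("US0", "JP0"), ("JP0", "UK0"), ("UK0", "US1"), ("US1", "JP1"), ("JP1", "UK1")]
    print(backbone_uses(blind))  # US-JP and JP-UK links each carry it twice
    ```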

    we’re still using “someone else’s computer” … at least “we’re” using “our computer” and that’s the royal “we”. Multicast is all switch, no server; all juice, no seed

    What do you think backbone routers are? They are computers! Specialized for a particular task, but computers nonetheless. Owned by someone other than you. Your whole lament is that you can’t force those owners to implement multicast on their routers. I think using the royal “our” computer, something we can do right now without forcing anyone else, is much better by comparison. If you insist that P2P swarm members, the very people who actually want to see your livestream, are not good enough, and that you only want to use “your” computer to broadcast and no one else’s, then you are left with no options other than bouncing ham radio video signals off the ionosphere. And even the radio spectrum is claimed by governments.

    MBGP table will be megabytes long and extremely dynamic

    I think you underestimate the size. Imagine if multicast were ubiquitous: billions of internet-connected users, each with dozens? hundreds? of multicast subscriptions. Each video content creator is a multicast, as is each blogpost you follow, each multicast-twitter handle, each lemmy community you subscribe to. Hundreds easily. That’s many gigabytes, possibly hundreds of gigabytes, of state to fit into every router. BGP is simple because you care only about the physical links you actually have. You can stuff entire IP ranges into a single routing table entry. Your entire table could be a dozen entries. Fits inside the silicon. With multicast I don’t think you can fold it in; you must keep the entire many-to-many table on every single router[1]. And consult that 100 GB table to route every single packet, in case it needs to get split. As you said, impossible with 1990s technology, probably possible but contrary to business goals in 2020.
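
    Roughly, the asymmetry looks like this (the addresses, group names, and port names are made up):

    ```python
    # Unicast: whole IP ranges collapse into a handful of prefix -> next-hop rows.
    unicast_table = {
        "0.0.0.0/1":   "phy04",  # half the address space via one upstream
        "128.0.0.0/1": "phy07",  # the other half via another
    }

    # Multicast: each live (source, group) pair that anyone downstream joined
    # needs its own fan-out row, and these do not aggregate into prefixes.
    multicast_table = {
        ("198.51.100.5", "stream-A"): ["phy04", "phy07"],
        ("203.0.113.9",  "stream-B"): ["phy04", "phy07", "phy12"],
        # ...one row per live (source, group) seen through this router
    }
    ```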

    You are concerned about the battery life of your phone when you use the bandwidth of 2 video streams instead of watching just 1? Yet you expect every single router owner to plug in hundreds of gigabytes of extra RAM sticks and spend extra CPU power and electricity looking up routing tables to handle your multicast traffic for you. You are just offloading the resource usage onto other people’s computers! Not “our” computers - “theirs”. Remember how much criticism Bitcoin got for wasting resources? Not the proof of work, but having to store a duplicate copy of the 100 GB transaction blockchain on every single node. All that hard drive space wasted! When “Mastercard” and “Visa” can do it with a single database on a mainframe. Yet now you want “them” to do the same and “waste” 100 GB of RAM on every single router just so your battery life is a little better.

    If everyone suddenly used the internet to this full potential, then we would get the screws turned on us. … Multicast would essentially fly under the radar.

    This does not follow. Didn’t you say that multicast was already sabotaged by the very same cable-distribution networks to maintain their send-monopoly? You expect to force the ISPs to turn multicast back on and somehow have it fly under the radar, while P2P would get the screws turned? It can’t be one and not the other! If you plan to have governments force the ISPs to fall in line and implement multicast standards, then why couldn’t you have the same governments (driven by the democratic pressure of billions of internet users demanding freedom, presumably) enshrine P2P rights? Again, remember that P2P is something we already have, something that already works and can be expanded with no additional cooperation from other players. Multicast is something that would need to be forced on others, on everyone, and would require physical hardware updates. If there are future restrictions on P2P, they will be easier to defend against politically and technologically. If you cannot defend P2P, then you for sure do not have enough political power to force multicast.

    [1]: Thinking about this, maybe you could fold it in a little. Given N internet users (~a billion), each with S subscriptions (say a hundred), C content feeds (a hundred million? 10% of users are also creators, 90% are pure consumers), and P physical links per router (say ten), then instead of N*S state (100s of GB), each router could fold it down into C*P state (single GBs). As in: “If I receive a multicast packet from [source ip=US.5.6.7] to [destination ip=anyone], route copies of it out through phy04, phy07, and phy12”. You would still need a mechanism to propagate table changes pretty rapidly (a full refresh about once every minute?). Your phone can be switching cells or powering on and off. You don’t want to multicast packets to a powered-off IP - that would be a waste of resources!
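
    Checking the footnote’s arithmetic with the numbers as assumed above:

    ```python
    N = 1_000_000_000  # internet users
    S = 100            # subscriptions per user
    C = 100_000_000    # content feeds (10% of users create)
    P = 10             # physical links per router

    print(f"unfolded: {N*S:.1e} entries, folded: {C*P:.1e} entries")
    # unfolded: 1.0e+11 entries, folded: 1.0e+09 entries
    # At a few bytes per entry that is indeed 100s of GB vs single GBs.
    ```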

    And how do you detect oversubscribing? If a million watchers subscribe to 1 multicast livestream - fine, but what happens when 1 troll subscribes to a million livestreams? If I subscribe to 1 million video streams, obviously my last-mile connection cannot fit them all. With TCP unicast, the senders would stop receiving TCP ACK replies from me and throttle down. But with multicast, the routers in between know nothing about my last mile, or even whether my phone has been powered on within the last minute. All they know is “if receive multicast from IP1, send to phy04; if receive multicast from IP2, send to phy04;” etc. Would my upstream routers not get saturated trying to send a million video streams to a dead IP? Would we need to implement some sort of reverse-multicast version of “TCP ACK”?
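
    One way to picture the refresh mechanism from the footnote is as soft state with a timeout, so streams to dead IPs die on their own. (Real IP multicast leans on periodic IGMP/PIM membership messages in a broadly similar spirit; this sketch is just the concept, not that protocol.)

    ```python
    import time

    TTL = 60.0  # seconds a subscription survives without a refresh
    subs = {}   # (source, out_port) -> timestamp of last refresh

    def refresh(source, out_port):
        """Called when a downstream re-confirms interest in `source`."""
        subs[(source, out_port)] = time.monotonic()

    def ports_for(source):
        """Forward a packet from `source` only to ports refreshed within TTL;
        expired entries (powered-off phones, silent trolls) are pruned."""
        now = time.monotonic()
        for key in [k for k, t in subs.items() if now - t >= TTL]:
            del subs[key]
        return [p for (s, p) in subs if s == source]
    ```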




  • While I agree that P2P is the next best thing and torrents are pretty awesome, they are unicast and ultimately they waste far more resources, especially intercontinental bandwidth, than multicast would.

    Tell me if I understand the use case correctly here. I want to livestream to my 1000 viewers but don’t want to go through CDNs and gatekeepers like Twitch. I want to do it from my phone, as I am entitled to by the spirit of the free internet and the democratization of information, but I obviously do not have enough bandwidth for 1000 unicast video streams. If only I had the ability to use multicast, I could send a single video stream up my cellular connection, and at each internet backbone router it would get duplicated and split as many times as necessary to reach all my 1000 subscribers. My 100 viewers in Japan are served by a single stream in the trans-Pacific backbone that gets split once it touches land. Is that all correct?

    In that case, torrent/peertube-like technology gets you almost all of the way there! As long as my upload ratio is greater than 1 (say I push the bandwidth equivalent of TWO video streams up my cellular), and each of my two initial viewers (using their own phones or tablets or whatever devices that can communicate with each other equally well across the global internet without any SERVERS, CDNS, or MIDDLEMEN in between, using IPv6 as God intended) pushes it to two more, and so on, then within 10 hops and 1 second of latency, all 1000 of my viewers can see my stream. Within 2 seconds, a million could see me in theory, with zero additional bandwidth required on my part, right? In terms of global bandwidth resource usage, we are already within a factor of two of the ideal case of working multicast!
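
    The hop math checks out (assuming each hop relays within ~100 ms, a number I am making up for illustration):

    ```python
    from math import ceil, log2

    for viewers in (1_000, 1_000_000):
        hops = ceil(log2(viewers))  # binary fan-out: each relay feeds two more
        print(f"{viewers} viewers within {hops} hops (~{hops * 0.1:.0f} s)")
    # 1000 viewers within 10 hops (~1 s)
    # 1000000 viewers within 20 hops (~2 s)
    ```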

    It is true that my 100 peertube subscribers in Japan could be triggering my video stream to be sent through the intercontinental pipe multiple times (and even back again!), but that is only because the peertube protocol is not yet geography-aware! (Or maybe it already is?) Have you considered adding geographic awareness to peertube instead? Then only one viewer in Japan would receive my stream directly, and then pyramid-share it with all the other Japanese viewers.

    P2P, IPv6, and geographic awareness are something you can pursue right now, and they get you to within a factor of 2 (or better) of the ideal multicast dream! Is a factor of 2 an acceptable level of resource waste? And you can implement it all on your own, without requiring every single internet backbone provider and ISP to cooperate with you and upgrade their router hardware to support multicast. AND you get all the other features of peertube, like being able to watch a video that is NOT a livestream. Or being able to read a comment that was posted while your device was powered off.

    Also, I am intrigued by the great concern you show for intercontinental bandwidth usage, considering those pipes are owned by the same types of big for-profit companies as the walled-garden social networks and CDNs you find so distasteful. From the other end, the reason geographic awareness has not already been implemented in bittorrent and most other P2P protocols is precisely that bandwidth has been so plentiful. I can easily go to any website in Japan, play video games with the Chinese, or upload Linux images to the Europeans, without worrying about all the peering arrangements in between. If you are Netflix you have to deal with it, pay for peering, and build out local CDN boxes, but as a P2P user I’ve never had to think about it. Maybe if 1-to-millions torrent-based serverless livestreaming from your phone were to become popular, the intercontinental pipe owners might start complaining, but for now the internet just works.





  • It’s worse. They are saying that the EU copyright law, as written, only allows decompiling/reverse engineering to “fix bugs”. A bug fix would involve a software patch of some sort. But the security researchers did not have time to write a patch yet; what they did was tell the customer “Yep, it’s fucked. Your vendor put in a killswitch to make the trains brick themselves.” So that does tell them where the problem is, but it is not a bona fide bug fix from the Bugfix region of France, and therefore illegal.


  • Newag [train maker] claims that the Dragon Sector [whitehat hacker] team endangered passengers’ safety by modifying the software without proper experience. But Newag then turns right around and claims that Dragon Sector did not modify the software at all. They point out that EU law only allows reverse engineering of software in order to fix bugs. And if Dragon Sector did not actually modify the software, it cannot have fixed any bugs, in which case their reverse-engineering must be illegal.




  • I know Lemmy hates AI, but this actually would be a perfect use for it. The hard part is pinning down what an ad even is. Yes, you could try to use secondary characteristics like image color or normalized sound volume (WhyTF do youtube ads still sound 3x louder than the content? are we living in the cable era again?), but those would be error-prone for any content more visually intense than a podcast. They would also miss sponsorblock-style content like “I love showing you all these foreign countries but what I love even more is having my internet connection secure” that matches the video’s flow. A crowdsourced lookup table of all known ad clip fingerprints would go a long way, until ad videos themselves start being AI-generated on the fly for that sweet personalization revenue.
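
    A minimal sketch of the lookup-table idea (the hash choice and the table contents are purely illustrative; a real system would need a perceptual hash robust to re-encoding, not an exact one):

    ```python
    import hashlib

    # Crowdsourced set of fingerprints of known ad clips (placeholder values).
    KNOWN_AD_FINGERPRINTS = {"3f1a9c02", "b77e4d10"}

    def fingerprint(segment: bytes) -> str:
        # Stand-in for a perceptual audio/video hash.
        return hashlib.sha256(segment).hexdigest()[:8]

    def is_known_ad(segment: bytes) -> bool:
        return fingerprint(segment) in KNOWN_AD_FINGERPRINTS
    ```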

    No, what I really want is to distill the idea of what I want to see into an AI and have it filter out what I don’t want to see for me. I know an ad when I see one, so an AI can too. Pre-roll/mid-roll ads? Gone. Sponsorblock content? Gone. Like and subscribe? Skipped as if it didn’t exist. Virtual billboards on the sidelines of sporting events? Overlaid with kittens. Idiocracy banners squeezing the video from either side? Cropped and rescaled. Watermarks? Excised and content-aware-filled.

    The last frontier is when the content itself is secretly an ad, imprinting upon you some idea or point of view. You’ll have to watch out for that one on your own.


  • Ah, I can see OP’s line of thought now:

    • you have a point A’ on a plane and a random point A
    • you find a midpoint B and draw a sphere around it. A and A’ are now a diameter of the sphere
    • pick two random points C and D on the circle where the plane and the sphere intersect
    • by the “triangle inscribed in a circle/sphere where one side is a diameter” rule, such a triangle must be a right triangle
    • therefore both angles ACA’ and ADA’ are right angles
    • thus C and D both satisfy the conditions of the initial question (with all points renamed: A=P, (C or D)=H, A’=A)
    • OP never defined what a projection is, it being “4th grade math”, but one of the requirements is being unique
    • C and D cannot both be the projection, therefore the initial question must be answered “false”: just because AH is perpendicular to PH doesn’t make H a projection (a quick numeric check of this construction follows below)
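
    Numerically, the construction holds up (coordinates chosen arbitrarily for the check):

    ```python
    import numpy as np

    A  = np.array([0.0, 0.0, 2.0])  # the point off the plane z = 0
    A_ = np.array([2.0, 0.0, 0.0])  # its counterpart on the plane
    B  = (A + A_) / 2               # sphere center; radius is |AA'| / 2
    r  = np.linalg.norm(A - A_) / 2

    # The sphere meets the plane z = 0 in a circle around (B_x, B_y, 0):
    rc = np.sqrt(r**2 - B[2]**2)
    C = np.array([B[0], B[1] + rc, 0.0])
    D = np.array([B[0], B[1] - rc, 0.0])

    for H in (C, D):
        print(np.dot(A - H, A_ - H))  # 0.0 both times => right angle at H
    ```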

    I like treating posts as puzzles, figuring out thread by thread WTF they are talking about. But dear OP, let me tell you: your picture and your explanation of it are completely incomprehensible to everyone else xD. The picture is not an illustration of the question but a sketch of your search for a counterexample, with all points renamed of course, but also with a sphere appearing out of nowhere (for you to invoke the inscribed-triangle rule, also mentioned nowhere). Your headline question is a non-sequitur, jumping from 4D (never to be mentioned again) into a ChatGPT experiment, into demanding more education in schools. You complain about geometry being hard but also simple. The math problem itself was not even your question, yet it distracted everyone else from whatever it was you were trying to ask. If you ever want to get useful answers from people other than crazed puzzleseekers like me, you’ll need to use better communication!


  • Ultimately, you’d need to do something like run a headless browser in a virtual machine, have it play out and record the entire video, then use something like AI to splice out the ad segments and distracting elements (a souped-up sponsorblock will work for a while, but eventually ads will be injected into the raw video stream at random intervals), and present the pristine finished content to you. Basically we are going to re-invent TiVo all over again xD.
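
    A deliberately hand-wavy sketch of that pipeline; every helper below is a stub I invented, since the point is the architecture rather than any real library:

    ```python
    def play_out_and_record(url: str) -> bytes:
        """Stub: drive a headless browser in a VM and capture the whole stream."""
        raise NotImplementedError

    def find_ad_segments(video: bytes) -> list[tuple[float, float]]:
        """Stub: sponsorblock-style/AI classifier returning (start, end) times."""
        raise NotImplementedError

    def splice_out(video: bytes, segments: list[tuple[float, float]]) -> bytes:
        """Stub: cut the flagged segments and re-mux the rest."""
        raise NotImplementedError

    def tivo_2_0(url: str) -> bytes:
        video = play_out_and_record(url)  # pay the real-time cost up front
        return splice_out(video, find_ad_segments(video))
    ```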

    In the worst case, you can’t start watching until the pre-roll ad timers expire. This is how adblocking works on Twitch streams currently - you can only see a purple screen even if you block the ads.

    And yes, the headless browser will need to use AI for human-like mouse movement and to solve captchas - basically whatever state-of-the-art technologies spammers and scrapers are already currently using.

    Google is anticipating this future and is trying to implement and force hardware-based DRM for web video before then.





  • It’s a tarpit. If they simply displayed a blocked “no vids for u” message, you’d get outraged, go complain online, look for workarounds, and eventually find a bypass. If everything still works but poorly, you get annoyed, turn off your adblocker to troubleshoot, possibly blame the adblocker for being “buggy” and keep it off. Their help page solution implies they are hoping for just that. There is no “smoking gun” blocked message to go complain online about, even though it is indeed their servers that are degrading your connection on purpose in secret. Or maybe you give up and leave their ecosystem entirely, which is no big loss for them.

    The proper solution is to develop an adblock that they cannot detect is blocking ads. This may require actually downloading the ad video in background, and then lying that the video has played.
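
    The “download but don’t display” idea could look something like this sketch (the ad URL and timing are hypothetical; a real player would also have to mimic whatever playback telemetry the site checks):

    ```python
    import time
    import urllib.request

    def play_ad_silently(ad_url: str, duration_s: float) -> None:
        """Fetch the ad so the server sees a normal download, discard the bytes,
        then wait out the nominal play time before the player reports 'done'."""
        with urllib.request.urlopen(ad_url) as resp:
            while resp.read(65536):
                pass  # throw the ad away unwatched
        time.sleep(duration_s)  # satisfy any server-side timing checks
    ```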


  • TauZero@mander.xyz to Technology@lemmy.world · *Permanently Deleted* · 7 months ago

    it might detect those elevated stress levels [for callers] and it will automatically default going to a human being

    Damn. I get ice cold emotionless during an emergency, going straight to the point of reciting location and event when calling 911. Now I will have to also remember in the back of my mind to throw in a wavering voice and a couple of shrieks maybe to have my call routed properly. What a future.


  • Then you’d be surprised when you calculate the numbers!

    A Falcon 9 delivers 13,100 kg to LEO and carries 395,700 kg of propellant in the 1st stage and 92,670 kg in the 2nd. The propellant in both is LOX/RP-1. RP-1 is basically long chains of CH2, so together they burn as:

    3 O2 (3x32) + 2 CH2 (2x14) -> 2 CO2 (2x44) + 2 H2O (2x18)
    

    Which is 2*44/(2*44+2*18) = 71% CO2 by mass. Meaning each launch makes (395700+92670)*0.71 = 347 tons of CO2, or 347/13.1 = 26.5 tons of CO2 per ton to orbit. A lot of it is burned in space, but I’m guessing the exhaust gases don’t reach escape velocity, so they all end up in the atmosphere anyway.

    As for how much a compute satellite weighs, there is a wide range of possibilities, since they don’t exist yet. This is China launching a test version of one, but it’s not yet the compute-per-watt-per-kilogram-optimized artifact we’d imagine a supercomputer to be.

    I like to imagine something like a gaming PC strapped to a portable solar panel, a true cubesat :). Shopping online, I currently see a fancy gaming PC at 12.7 kg drawing 650 W, and a 600 W solar panel at 12.5 kg. Strap them together with duct tape, and it’s 1000/(12.7+12.5)*600 = 24 kW of compute power per ton to orbit.

    Something more real-life is the ISS support truss. STS-119 delivered and installed the S6 truss on the ISS. The 14,088 kg payload included solar panels, batteries, and truss superstructure, supplying the last 25% of the station’s power, or 30 kW. Say we double that mass to strap server-grade hardware and cooling onto it. That’s 1000*30/(2*14088) = 1.1 kW of compute per ton to orbit. A 500 kg, 1 kW server is overkill, but we are being conservative here.

    In a past post I calculated that fossil-fuel electricity on Earth makes 296 g of CO2 per kilowatt-hour (using a gas turbine at 60% efficiency burning 891 kJ/mol methane into 1 mol of CO2: 1 kJ/s * 3600 s / 0.6 eff / (891 kJ/mol) * 44 g/mol = 296 g, as is the case where I live).

    The CO2 payback time for a ton of duct-taped gamer PCs is 1000 kg * 26.5 kg CO2/kg / (24 kW * 0.296 kg CO2/kWh) / (24*365 h/yr) = 0.43 years. The CO2 payback time for the steel truss monstrosity is 1000 kg * 26.5 kg CO2/kg / (1.1 kW * 0.296 kg CO2/kWh) / (24*365 h/yr) = 9.3 years.
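
    Putting the whole chain in one place to double-check (same inputs as above; the truss figure lands nearer 9.6 years before rounding 1.06 kW up to 1.1):

    ```python
    co2_frac = 2*44 / (2*44 + 2*18)                       # 0.71 of exhaust mass is CO2
    co2_t_per_t = (395_700 + 92_670) * co2_frac / 13_100  # ~26.5 t CO2 per t to LEO
    grid = 3600 / 0.6 / 891 * 44 / 1000                   # 0.296 kg CO2/kWh (gas @ 60%)

    builds = {
        "duct-taped gamer PC": 1000 / (12.7 + 12.5) * 0.6,  # ~24 kW per ton
        "ISS S6-style truss":  1000 * 30 / (2 * 14_088),    # ~1.1 kW per ton
    }
    for name, kw in builds.items():
        years = 1000 * co2_t_per_t / (kw * grid) / (24 * 365)
        print(f"{name}: payback {years:.2f} years")
    # duct-taped gamer PC: payback 0.43 years
    # ISS S6-style truss: payback 9.57 years
    ```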

    Hey, I was pretty close!