• 0 Posts
  • 15 Comments
Joined 6 months ago
Cake day: July 23rd, 2025

  • This is my first time doing passthrough and idk what other optimizations I could do or if I’ve done anything wrong/suboptimal. I’d like your input. I’ll send you a message later.

    For anyone curious if this is an option (running Windows in a VM for audio), I’ve had good luck so far. Feels native, but I have more tests to do before I say it’s viable. Without passing through (IOMMU) the audio interface, it’s unusable.

    I installed Linux and set up a Windows 10 Pro VM yesterday. I did GPU passthrough (although I doubt that was necessary). I also passed through my audio interface (MOTU 8pre-es). Windows saw it and it showed up in the proprietary MOTU software after installing the drivers in the VM, but I couldn’t get audio to play. I then tried passing through the USB root hub (built into the motherboard) that the MOTU was connected to, and then it worked, just as it would on bare metal. I tried playing a couple of projects in Cubase and had no audio dropouts. Cubase has a meter that shows whether you missed audio buffer deadlines and why (CPU, disk), and it showed none, to my surprise.
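
    For anyone trying this, a quick way to check what can be passed through cleanly is to list which PCI devices share each IOMMU group (everything in a group generally has to be passed through together). A small script along these lines, just reading the standard sysfs paths on the host, does it:

    ```python
    # List IOMMU groups and the PCI devices in each one, to see whether the GPU
    # or a USB controller can be passed through without dragging along
    # unrelated devices. Run on the Linux host.
    from pathlib import Path

    groups = Path("/sys/kernel/iommu_groups")
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        print(f"IOMMU group {group.name}:")
        for dev in sorted((group / "devices").iterdir()):
            print(f"  {dev.name}")   # PCI address, e.g. 0000:03:00.0
    ```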

    Things I still want to try:

    1. How low can I get my buffer size / ASIO latency? (Some quick math after this list.)
    2. Can it handle a 192 kHz sample rate, and at what buffer size? The tests I did yesterday were 48 kHz and 44.1 kHz projects.
    3. How does it feel (in terms of latency) when using a MIDI controller keyboard?
    4. Can I do multi-channel recording without dropouts?
    5. Does the VM break Cubase’s audio latency compensation when recording (this determines recording latency and automatically aligns the recording to where you’d expect it to be)? I have a feeling the VM may introduce a latency that Cubase doesn’t account for.
    6. Does iLok or another license that I need fail in the VM? I only used software with Steinberg’s licenser yesterday.
    7. Probably some other stuff I’ve not thought of yet.
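
    For items 1 and 2, the quick math is just buffer size over sample rate (per direction; a round trip is roughly double, plus converter and driver overhead), so something like this covers the combinations I care about:

    ```python
    # One buffer's worth of latency = frames / sample_rate (per direction).
    # Round trip is roughly double, plus converter/driver overhead not shown here.
    for rate, frames in [(48_000, 64), (48_000, 256), (192_000, 256), (192_000, 1024)]:
        ms = frames / rate * 1000
        print(f"{frames:>5} samples @ {rate // 1000} kHz = {ms:.2f} ms per buffer")
    ```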

    What I’ve not figured out is a way to boot my existing Windows install in the VM. I’m sure there’s a way, but I don’t know it yet. I know it’s possible to pass an entire disk to the VM, but my host Linux install is on the same NVMe. I guess what I’d want is a way to create a virtual block device that maps the other partitions from the NVMe and then pass that virtual block device to the VM. Alternatively, I could install Linux to a different drive, but I’d rather not buy another NVMe at this time.
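
    Something like this is the rough idea I have in mind for that virtual block device (untested, and the partition paths are made up): use device-mapper to stitch the Windows partitions into one linear device and hand that to the VM as a raw disk. My understanding is the guest also needs to see a valid partition table, so in practice a small header image with a synthetic GPT would get prepended the same way; this sketch only shows the core mapping.

    ```python
    # Sketch: build a device-mapper "linear" device from the Windows partitions
    # on the shared NVMe so the VM can use it as a raw disk. Partition paths are
    # assumptions; the real layout needs checking with lsblk first.
    import subprocess

    def sectors(dev: str) -> int:
        """Size of a block device in 512-byte sectors (via blockdev --getsz)."""
        return int(subprocess.check_output(["blockdev", "--getsz", dev]).strip())

    # Partitions belonging to the existing Windows install (EFI + C:), in order.
    windows_parts = ["/dev/nvme0n1p1", "/dev/nvme0n1p3"]

    # dmsetup table: one line per segment, "<start> <length> linear <device> 0".
    table, offset = [], 0
    for part in windows_parts:
        length = sectors(part)
        table.append(f"{offset} {length} linear {part} 0")
        offset += length

    # Creates /dev/mapper/win-vm; dmsetup reads the table from stdin.
    subprocess.run(["dmsetup", "create", "win-vm"],
                   input="\n".join(table).encode(), check=True)
    ```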


  • They also dropped support for older plugins, of which I have a lot. This is a big issue for audio stuff IMO. Apple breaks backwards compatibility frequently, which has some benefits, but commercial audio plugins are expensive and updates generally aren’t free. I actually have a bunch of very old plugins that were free but are no longer supported. Many were Windows-only and I can still run them roughly 15-20 years later, but the ones that were released for Mac I have no hope of running.

    If you’re doing audio work professionally, you probably keep buying updates for your plugins, so Mac is probably a good choice. I don’t even release music (I just make noise); I’m just a hobbyist (with some higher-end equipment and software). There are a lot of hobbyists who wouldn’t be able to afford the software update costs (ignoring the Apple hardware costs). Depending on the plugin libraries, that cost can be bigger than the Apple hardware cost. Granted, there are some really good free plugins, but some of the really popular stuff isn’t free.




  • No reason it can’t be done on 120 V (at a technical level). In fact, most solar inverters in the US could do this, as they basically do the same thing, just on a larger scale (higher current, and therefore wired into electrical panels rather than through an outlet, since outlets have lower current limits). All you need is for the inverter to synchronize its AC output to match the grid. If you had a smaller inverter, you could just connect it to an outlet (ignoring building codes, insurance, and other non-technical reasons). So the choice is then between centralized larger inverters or smaller inverters per panel or two. If you live in a very densely populated area where you can only put a panel or two on a balcony, or you don’t have control of your electrical panel, then the small-inverter method makes sense.
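
    To put a rough number on the outlet limit (illustrative only, not electrical or code advice): a standard 15 A, 120 V branch circuit, derated to 80% for continuous loads, caps a plug-in inverter somewhere around 1.4 kW.

    ```python
    # Ballpark ceiling for backfeeding through one ordinary North American outlet.
    # 80% is the usual continuous-load derating; numbers are illustrative.
    volts, breaker_amps, continuous_derate = 120, 15, 0.80
    print(f"max continuous = {volts * breaker_amps * continuous_derate:.0f} W")  # ~1440 W
    ```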


  • I think you’re on to something, but sort of accidentally. A couple replies to you are saying it’s not possible, but I think they’re making an assumption that is not correct in many cases.

    The replies are saying it’s not possible because the layers are flattened before being passed to the compression; thus the uncensored/unredacted data is not part of the input to the compression and therefore cannot have any impact on its output. This is true assuming you are starting with an uncompressed image.

    Here’s a scenario where the uncensored/unredacted parts of the image could influence the output: someone takes a photo of their ID, credit card, etc. It’s saved in a lossy compressed format (e.g. JPEG), specifically not a lossless format. They open it in an image editing tool to 100% black out some portion, then save it again (the format doesn’t actually matter). I feel like someone is going to think I’m misunderstanding if I don’t explain the different output scenarios.

    First is the trivial case: a multi-layer output with the uncensored/unredacted data as its own layer. In this case, it’s trivial to get the uncensored/unredacted data, as it is simply present and visible if you use a tool that can show the individual layers, but the general assumption is that this is not the case – that the output is a single-layer image, in which case we have two scenarios.

    Second case: lossy compressed original, lossless censored output. Consider that this censored/redacted image is flattened and saved in a lossless format such as PNG. Certainly no new artifacts derived from the censored data are added at this step, both because PNG is lossless and because the image was flattened before being passed to PNG. However, the artifacts introduced by the compression applied before the censoring (e.g. the JPEG compression of the pre-censored image) remain in the visible portions of the output, and that compression ran with the sensitive data present (for example, JPEG’s 8x8 DCT blocks that straddle the edge of the redacted region were computed from both visible and now-hidden pixels). I suspect this is actually a common case.

    Third case: lossy compressed original, lossy compressed censored output. Same as the second case, except now you have additional artifacts: you now have artifacts from the censored portion, and the new lossy compression adds artifacts on top of the previous ones. This probably makes recovery more difficult, but the point is that the artifacts from the original, uncensored compression pass are still present.
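
    If it helps to make the second case concrete, here’s a toy version of that pipeline using Pillow (the synthetic gradient just stands in for the photo). The PNG step adds nothing, but diffing the redacted result against a redacted copy of the never-compressed original shows the earlier JPEG’s artifacts still sitting in the visible area.

    ```python
    # Minimal sketch of case two: lossy original, lossless redaction.
    # Requires Pillow; the image and box coordinates are stand-ins.
    import io
    from PIL import Image, ImageChops, ImageDraw

    # A synthetic "original" photo (256x256 gradient).
    src = Image.radial_gradient("L").convert("RGB")

    # The camera saves it as JPEG: 8x8 block artifacts are baked in here,
    # computed with the soon-to-be-redacted pixels as part of the input.
    buf = io.BytesIO()
    src.save(buf, "JPEG", quality=70)
    lossy = Image.open(buf).convert("RGB")

    def redact(img):
        out = img.copy()
        ImageDraw.Draw(out).rectangle((80, 80, 176, 176), fill="black")
        return out

    # Flatten and save losslessly: PNG adds no new artifacts, but the JPEG's
    # artifacts outside the black box survive, which is the leak in question.
    diff = ImageChops.difference(redact(lossy), redact(src))
    print("surviving artifact amplitude per channel:", diff.getextrema())
    ```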




  • Slackware was my first, and I didn’t know that package managers that resolve dependencies existed (or maybe they didn’t at the time; even if they did, they probably lagged on versions). I learned true dependency hell when trying to build my own Apache, Sendmail, etc. from source while missing a ton of dependency libraries (or needing newer versions), and then keeping things relatively up to date. Masochistic? Definitely for me, but idk how much of that was self-inflicted by not using the package tool. Amazing learning at the time. This would have been mainly Slackware 3.x and 4.x. I switched to Debian (not Arch BTW).


  • It could be, but they seem to get through Cloudflare’s JS. I don’t know if that’s because Cloudflare is failing to flag them for JS verification or if they specifically implement support for Cloudflare’s JS verification since it’s so prevalent. I think it’s probably due to an effective CPU time budget. For example, Googlebot (for search indexing) runs JS for a few seconds, then snapshots the page and indexes it in that snapshot state, so if your JS doesn’t load and run fast enough, you can get broken pages / missing data indexed. At least that’s how it used to work. Anyway, it could be that rather than a wall-clock cap, the crawlers have a CPU time cap that Anubis exceeds whereas Cloudflare’s JS doesn’t – if they do use a cap, they probably set it high enough to get past Cloudflare’s check given Cloudflare’s popularity.


  • Is there a particular piece? I’ll comment on what I think are the key points from his article:

    1. Wasted energy.

    2. It interferes with legitimate human visitors in certain situations. A simple example would be wanting to download a bash script via curl/wget from a repo that’s using Anubis.

    3A) It doesn’t strictly meet the definition of a CAPTCHA (which should be something a human can do easily but a computer cannot), and a CAPTCHA is the theoretical solution to blocking bots.

    and very related

    3B) It is actually not that computationally intensive and there’s no reason a bot couldn’t do it (a rough sketch of how little work that is follows below).
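
    To make 3B concrete: an Anubis-style challenge boils down to finding a nonce such that the SHA-256 hash of challenge + nonce starts with a handful of zero hex digits. A crawler author can solve that in a few lines and, at typical difficulties, a few milliseconds (the challenge string and difficulty here are made up):

    ```python
    # Brute-force an Anubis-style SHA-256 proof of work: find a nonce whose
    # hash of (challenge + nonce) has `difficulty` leading zero hex digits.
    import hashlib, itertools, time

    def solve(challenge: str, difficulty: int) -> int:
        target = "0" * difficulty
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce

    start = time.perf_counter()
    nonce = solve("example-challenge-from-server", difficulty=4)
    print(f"nonce={nonce} found in {time.perf_counter() - start:.3f}s")
    ```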

    Maybe there were more, but those are my main takeaways from the article, and they’re all legit. The design of Anubis is in many respects awful: it burns energy, breaks (some) functionality for legitimate users, unnecessarily challenges everyone, and, probably worst of all, is trivial for the operator of a crawling system to defeat.

    I’ll cover wasted energy quickly: I suspect Anubis wastes less electricity than the site would waste servicing bot requests, though this is site-specific, as it depends on the resources required to service a request and the rate of bot requests vs legitimate user requests. Still, it’s a legitimate criticism.

    So why does it work, and why am I a fan? It works simply because crawlers haven’t implemented support to break it, and it would be quite easy to do so. I’m actually shocked that Anubis isn’t completely ineffective already. I held off on even testing it because I had assumed it would be adopted rather quickly by sites and, given how simply it can be defeated, that it would be defeated and therefore useless.

    I’m quite surprised for a few reasons that it hasn’t been rendered ineffective, but perhaps the crawler operators have decided that it doesn’t make economic sense. I mean, if you’re losing, say, 0.01% (I have no idea) of web content, does that matter for your LLMs? If it were concentrated in niche topic domains where a large amount of that niche content was inaccessible, then they would probably care, but I suspect that’s not the case. Anyway, while defeating Anubis is trivial, it’s not without a (small) cost, and even a small cost simply might not be worth it.

    I think there may also be a legal element. At a certain point, I don’t see how these crawlers aren’t in violation of various laws related to computer access. What I mean is, these crawlers are in fact accessing computer systems without authorization. Granted, you can take the point of view that the act of connecting a computer to the internet implies consent, but that’s not how the laws are written, at least in the countries I’m familiar with. Things like robots.txt can sort of be used to inform what is/isn’t allowed to be accessed, but it’s a separate request, mostly used to help with search engine indexing, and not all sites use it. Something like Anubis is very clear and in your face, and I think it would be difficult for a crawler operator who specifically bypassed Anubis to claim that the access wasn’t unauthorized.

    I’ve dealt with crawlers as part of devops tasks for years, and years ago it was almost trivial to block bots with a few heuristics that needed to be updated from time to time or added temporarily. This has become quite difficult and isn’t really practical for people running small sites, and probably not even for a lot of open source projects that are short on people. Cloudflare is great, but I assure you, it doesn’t stop everything. Even in commercial environments years ago, we used Cloudflare enterprise and it absolutely blocked some traffic, but we’d still get tons of bot traffic that wasn’t being blocked. So what do you do if you run a non-profit, FOSS project, or some personal niche site that doesn’t have the money or volunteer time to deal with bots as they come up, and those bots are using legitimate user agents and coming from thousands of random IPs (including residential! It used to be that you could block some data center ASNs in a particular country, until that stopped working)?
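
    For context, the kind of heuristic that used to be enough looked roughly like this (a simplified sketch with made-up thresholds; today’s crawlers spread requests across thousands of residential IPs with real browser user agents, which is exactly why it stopped working):

    ```python
    # Old-school bot heuristic: count requests per IP in a sliding window and
    # block anything over a threshold, with a stricter limit for bot-ish agents.
    import time
    from collections import defaultdict, deque

    WINDOW_S, LIMIT = 60, 120                    # max requests per IP per minute
    recent = defaultdict(deque)                  # ip -> timestamps of recent hits

    def should_block(ip: str, user_agent: str) -> bool:
        now = time.time()
        hits = recent[ip]
        hits.append(now)
        while hits and hits[0] < now - WINDOW_S: # drop hits outside the window
            hits.popleft()
        limit = LIMIT // 4 if "python" in user_agent.lower() else LIMIT
        return len(hits) > limit
    ```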

    I guess the summary is: bot blocking could be done substantially better than what Anubis does, and with less downside for legitimate users, but it works (for now), so maybe we should only concern ourselves with the user-hostile aspect of it at this time – preventing legitimate users from doing legitimate things. With existing tools, I don’t know how else someone running a small site can deal with this easily, cheaply, without introducing things like account sign-ups, and without violating people’s privacy. I have some ideas related to this that could offer some big improvements, but I have a lot of other projects I’m bouncing between.


  • A friend (works in IT, but asks me about server-related things) of a friend (not in tech at all) has an incredibly low-traffic niche forum. It was running really slowly (on shared hosting) due to bots. The forum software counts unique visitors per 15 minutes, and it was about 15k/15 min for over a week. I told him to add Cloudflare; it dropped to about 6k/15 min. We experimented with turning Cloudflare off and on, and it was pretty consistent. So then I put Anubis on a server I have and they pointed the domain to my server. Traffic dropped to less than 10/15 min. I’ve been experimenting with toggling Anubis/Cloudflare on and off with this forum for a couple of months now. I have no idea how the bots haven’t scraped all of the content by now.

    TL;DR: in my single isolated test, Cloudflare blocked about 60% of the crawler traffic; Anubis blocked presumably all of it.

    Also, if anyone active on Lemmy runs a low-traffic personal site and doesn’t know how to run Anubis or can’t (e.g. shared hosting), I have plenty of excess resources and can run Anubis for you off one of my servers (in a data center) at no charge (there should probably be some language about it not being perpetual, me having the right to terminate without cause or notice, no SLA, etc.). Be aware that it does mean HTTPS is terminated at my Anubis instance, so I could log/monitor your traffic if I wanted, which is a risk you should be aware of.


  • Depending on your comfort level, you may want to do what I’m in the process of doing. I’m still waiting on parts, but this will work for my heating system.

    I have old 2-wire thermostats in a few places that I want to replace. I have hot water baseboard heat with multiple heating zones. I couldn’t find an existing solution that worked the way I wanted and was reasonably priced, so I decided to make my own. This only works for single-stage systems where exhaust fans, circulation pumps, and other components are controlled by the heating system generally and not by a single specific thermostat, which is almost certainly the case if you have those old mechanical 2-wire thermostats. You could do something more sophisticated, but I don’t need to.

    All I need is a relay (controlled by HA) to simulate the thermostat turning on/off, plus some way to tell it when to switch (such as a temp sensor); again, lots of options with HA.
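
    The decision logic itself is nothing fancy. Whether it ends up in an HA automation or in ESPHome’s climate support, per zone it amounts to a bang-bang thermostat with a bit of hysteresis, roughly like this (temperatures and the hysteresis band are just example values):

    ```python
    # Per-zone bang-bang thermostat with hysteresis: decide whether the zone's
    # relay (standing in for the old 2-wire thermostat contact) should be closed.
    def relay_should_be_on(current_c: float, setpoint_c: float,
                           heating_now: bool, hysteresis_c: float = 0.5) -> bool:
        if current_c <= setpoint_c - hysteresis_c:
            return True          # too cold: call for heat
        if current_c >= setpoint_c + hysteresis_c:
            return False         # warm enough: stop calling
        return heating_now       # inside the band: keep the current state

    # e.g. 19.6 C in a 20 C zone that is currently idle stays off (inside the band)
    print(relay_should_be_on(19.6, 20.0, heating_now=False))
    ```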

    This can be done in a variety of ways, but I’m using NodeMCU boards (they have WiFi onboard) and ESPHome firmware. I’ve used this combination for a number of HA integrations so far. Near my boiler, where all of the old thermostat wires terminate, there will be a NodeMCU board with multiple independently controlled relays (one per thermostat, to control the individual heating zones).

    The 2 wires that go to each old thermostat will become the power supply for separate NodeMCU boards, which will sit in a 3D-printed case along with buttons and a display, and (in one room) will also include a temp/humidity sensor since I don’t already have one there. The other locations already have more sophisticated air quality sensors that include temp/humidity, so there’s no need to duplicate them, although maybe I will for redundancy.