Yes, I was in this situation and I did exactly that. You need a splitter and then MoCA adapters in the rooms (a bit expensive, at least 5-6 years ago where I lived).
loudwhisper@infosec.pub to Technology@lemmy.world • Atlassian goes cloud-only, customers face integration issues (English)
61 · 4 months ago

Not to self-promote, but I have expressed my opinion on the topic.
Wait until you need a team of people to optimize cloud costs (FinOps), for peak irony.
loudwhisper@infosec.pub to Technology@lemmy.world • Kick faces possible $49 M fine after French streamer Jean Pormanove dies on air (English)
3 · 4 months ago

In this case I am quite happy to be out of the loop, frankly. I can live in blissful ignorance of at least this stuff.
loudwhisper@infosec.pub to Technology@lemmy.world • Kick faces possible $49 M fine after French streamer Jean Pormanove dies on air (English)
23 · 4 months ago

I feel completely out of the loop when stuff like this happens.
I went looking around and found an article that expands a lot on this topic: https://maxread.substack.com/p/who-killed-jean-pormanove
loudwhisper@infosec.pub to Selfhosted@lemmy.world • Why are anime catgirls blocking my access to the Linux kernel? (English)
3 · 4 months ago

Exactly my thoughts too. Lots of theory about why it won't work, but no acknowledgement of the fact that if people use it, maybe it does work, and when it stops working, they will stop using it.
But the estimate assumes each NC instance gets half a vCPU and 1 GB of memory. This is a super conservative estimate that doesn't include anything besides a tiny Fargate deployment and Aurora instances.

Edit: Fargate ($40/month) and the tiniest Aurora instances at 20% utilization with merely 50 GB of storage ($120/month). Missing: S3, which will easily cost $50 in storage and transfer (for only a few TB), ALBs, and network traffic, especially outbound (easily $50-100 depending on volumes). This basic solution's real cost is already between $150 and $300/month. I don't know NC well enough to estimate DB volumes and overall usage, but I assume it's going to be a lot of data in and out (backups, media, etc.). /edit

For a heavily used NC instance (assuming a company offering it as a service), the cost is going to become massive pretty fast.
Also, as a side note: if a company is offering NC as a service but doesn't manage a single piece of the NC deployment, what is the company's product? And most importantly, how are they going to make money when AWS will eat a chunk of their revenue that scales linearly with usage, forever?
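For what it's worth, this is the back-of-the-envelope arithmetic, using the rough figures above (not exact AWS pricing):

```python
# Back-of-the-envelope monthly cost for the minimal Fargate + Aurora Nextcloud
# setup, using the rough figures quoted above (not exact AWS pricing).

fargate = 40                 # tiny Fargate deployment
aurora = 120                 # smallest Aurora instances, ~20% utilization, 50 GB
s3 = 50                      # storage + transfer for a few TB
alb_and_egress = (50, 100)   # ALBs + outbound traffic, low/high guess

low = fargate + aurora                            # ~ $160: compute + DB only
high = fargate + aurora + s3 + alb_and_egress[1]  # ~ $310: with S3 and traffic

print(f"roughly ${low}-${high} per month")  # same ballpark as the $150-300 above
```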
Well yeah, it wouldn't break the bank, but a conservative cost estimate (without considering network costs, for example, which are quite relevant for a data-intensive app) brings this setup to about $40/month. That is about 5 times more expensive than a VPS with 4x the resources.
OP said this is some sort of "enterprise self-hosting" solution, which I guess kind of makes sense then. For a company providing Nextcloud as a service I would never lock myself into a vendor and let AWS take a huge chunk of my revenue forever, but I can imagine folks have different opinions.
In that case, Pulumi's permissions are too broad IMHO for what it has to do; an enterprise should adhere to least privilege. Likewise, as I wrote in another comment, the egress security groups are unclear to me (why is any egress traffic needed at all?), and the image consumed should be pinned to a digest. Or better yet, it should come from a private enterprise registry, ideally with an attestation that can be verified at runtime.
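To sketch what I mean by least privilege (a hypothetical example, assuming the stack uses the pulumi_aws provider in Python; I don't know OP's actual setup): the role that runs `pulumi up` could be limited to the services this stack actually manages, rather than broad or admin permissions, something like:

```python
# Hypothetical least-privilege policy for the deployer role that runs `pulumi up`,
# scoped to the services this stack touches (ECS, RDS/Aurora, ELB, S3, logs)
# instead of admin-level access. Names and scoping are illustrative only.
import json
import pulumi_aws as aws

deployer_policy = aws.iam.Policy(
    "nextcloud-deployer-policy",
    description="Only what the Nextcloud stack needs to deploy",
    policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "ecs:*",
                "rds:*",
                "elasticloadbalancing:*",
                "s3:*",
                "logs:*",
                "ec2:Describe*",
                "ec2:*SecurityGroup*",
            ],
            # Ideally narrowed further to specific resource ARNs or tags.
            "Resource": "*",
        }],
    }),
)
```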
I am not sure ECS Fargate makes sense versus an EC2 instance to run the workload. This setup alone will cost about $30/month for the CPU, assuming half a vCPU per replica on Fargate, plus about $12/month for the memory (1 GB per task). 2x t2.micro instances could be run for ~$20 without even considering reservation discounts etc. Obviously the gap becomes even larger at scale, which I suppose might be very interesting for an enterprise.
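Roughly, using the figures above (not exact, current AWS pricing; reservations or savings plans would widen the gap further):

```python
# Rough monthly comparison, Fargate vs EC2, using the figures quoted above
# (not exact AWS pricing; reservation discounts would widen the gap).

fargate_cpu = 30   # ~2 replicas x 0.5 vCPU each
fargate_mem = 12   # memory, ~1 GB per task (figure from above)
fargate_total = fargate_cpu + fargate_mem   # ~ $42/month

ec2_total = 20     # ~2x t2.micro, on-demand, no reservation discount

print(f"Fargate ~ ${fargate_total}/mo vs EC2 ~ ${ec2_total}/mo "
      f"({fargate_total / ec2_total:.1f}x more)")
```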
Plus, at this point, why not use managed Nextcloud directly (or an alternative)? If you are using managed storage, runtime, and database anyway, locked into a vendor…
Oh yeah, I am aware. Mostly I would question the idea of having multi-AZ redundancy and using a managed service for the DB (which indeed is expensive). All of this when a $5 VPS could host the same thing (maybe still using S3 for storage), accepting a few hours of downtime in the rare event your VPS explodes and you need to restore it from a backup.
So from my PoV this is absolutely overkill, but I concede that it depends a lot on the requirements. For my personal stuff I can't ever imagine having requirements so tight that they need such infra (in fact, I think not even most businesses have these requirements; I have written on the topic at https://loudwhisper.me/blog/hating-clouds/)…
Everyone is free to pick their poison, but I have to ask… why? What is the target audience here? This is a massively overkill architecture IMHO. Not to mention that you now need 3 managed services (Fargate, S3 and Aurora at least) for a single self-hosted tool, and that is being generous (not counting CloudWatch, ALBs, etc.).
- Why do you need security groups that allow egress anywhere (or at all)?
- I would pin the image to a digest rather than using latest (rough sketch of both points after this list).
- What is the average monthly cost of this infra for you?
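A minimal Pulumi (Python) sketch of what I mean by the first two points, assuming the stack uses pulumi_aws; resource names, the VPC id, and the digest are placeholders:

```python
# Sketch of the two points above; names, VPC id, and digest are placeholders.
import pulumi_aws as aws

vpc_id = "vpc-0123456789abcdef0"  # placeholder: your actual VPC

# 1. No blanket egress: only allow what the tasks actually need
#    (e.g. HTTPS out to AWS APIs / S3), instead of all ports to 0.0.0.0/0.
task_sg = aws.ec2.SecurityGroup(
    "nextcloud-task-sg",
    description="Nextcloud tasks: HTTPS egress only",
    vpc_id=vpc_id,
    egress=[aws.ec2.SecurityGroupEgressArgs(
        protocol="tcp",
        from_port=443,
        to_port=443,
        cidr_blocks=["0.0.0.0/0"],  # or tighter: VPC endpoints / prefix lists
    )],
)

# 2. Pin the container image to an immutable digest instead of a floating tag.
NEXTCLOUD_IMAGE = "nextcloud@sha256:<digest-of-the-release-you-tested>"  # placeholder
```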
loudwhisper@infosec.pub to Technology@lemmy.world • European Commission has a "Wifi4EU" initative, provides 93k high-speed private access points across the EU, free of charge. (English)
4 · 5 months ago

Someone runs MongoDB unauthenticated, bound to 0.0.0.0, with production data, on a computer without a VPN, and the problem is the WiFi?
Like I get what you are saying, but this sounds like saying we should ban speed bumps because: imagine there is a guy with a loaded gun pointed at a kid, safety off, finger on the trigger, and high on coke; if the car hits the speed bump, the toddler is gone. Yeah, but I would hardly say the speed bump is the issue.
loudwhisper@infosec.pub to Technology@lemmy.world • European Commission has a "Wifi4EU" initative, provides 93k high-speed private access points across the EU, free of charge. (English)
4 · 5 months ago

This is not really a common or easy attack, especially against any meaningful service (which is probably in preloaded HSTS lists).
It's not like this is the only shared network. In airports, millions of people connect to the same network every day.
Email is almost always zero-access encrypted (like the live chats here), considering the small % of Proton users and how few emails are exchanged only between them (or the even smaller % of PGP users). Drive is e2ee, like chat history. Basically I see it as email : chats = Drive : history.
Anyway, I agree it could be done better, but I don't really see the big deal. Any user unable to understand this won't get the difference between zero-access and e2ee anyway.
They compare it to Proton Mail and Drive, which are supposedly e2ee.
Only Drive is. Email is not always e2ee; it uses zero-access encryption, which I believe is the exact same mechanism used by this chatbot, so the comparison is quite fair tbh.
How would you explain it in a way that is both non-technical and accurate, and that differentiates you from all the other companies that are not doing anything even remotely similar? I am asking genuinely, because from the perspective of a user who decided to trust the company, zero-access is functionally much closer to e2ee than it is to "regular services", which is the alternative.
Scribe can be local, if that’s what you are referring to.
They also have a specific section on it at https://proton.me/support/proton-scribe-writing-assistant#local-or-server
Also, emails for the most part are not e2ee; they can't be, because the other party is not using encryption. They use "zero-access" encryption, which is different: it means Proton gets the email in cleartext, encrypts it with your public PGP key, deletes the original, and sends it to you.
See https://proton.me/support/proton-mail-encryption-explained
> The email is encrypted in transit using TLS. It is then unencrypted and re-encrypted (by us) for storage on our servers using zero-access encryption. Once zero-access encryption has been applied, no-one except you can access emails stored on our servers (including us). It is not end-to-end encrypted, however, and might be accessible to the sender's email service.
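To illustrate the mechanism (a deliberately simplified sketch of the idea, not Proton's actual implementation, which uses OpenPGP; this just uses hybrid RSA + AES via the Python `cryptography` package):

```python
# Simplified illustration of "zero-access" storage: the provider receives the
# message in cleartext, immediately encrypts it to the user's public key, stores
# only the ciphertext, and discards the plaintext. Not Proton's real code;
# just the shape of the mechanism.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.fernet import Fernet

# The user's keypair: the private key never leaves the user's client.
user_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
user_public_key = user_private_key.public_key()

def zero_access_store(plaintext_email: bytes, public_key) -> tuple[bytes, bytes]:
    """Provider side: encrypt to the user's public key, keep no plaintext."""
    session_key = Fernet.generate_key()                   # per-message symmetric key
    body_ct = Fernet(session_key).encrypt(plaintext_email)
    key_ct = public_key.encrypt(                          # wrap session key with RSA-OAEP
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # The plaintext is discarded here: only (key_ct, body_ct) gets stored.
    return key_ct, body_ct

def client_read(key_ct: bytes, body_ct: bytes, private_key) -> bytes:
    """Client side: only the holder of the private key can read the mail back."""
    session_key = private_key.decrypt(
        key_ct,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return Fernet(session_key).decrypt(body_ct)

stored = zero_access_store(b"Hello from a non-PGP sender", user_public_key)
assert client_read(*stored, user_private_key) == b"Hello from a non-PGP sender"
```

The crucial bit is the "discarded here" step: nothing cryptographic forces the provider to actually do that, which is why zero-access is a trust claim rather than e2ee.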
Over the years I've heard many people claim that Proton's servers being in Switzerland makes them more secure than servers in other EU countries.
Things change. They are doing it because Switzerland is proposing legislation that would definitely make that claim untrue. Europe is no paradise, especially certain countries, but it still makes sense.
From the Lumo announcement:
> Lumo represents one of many investments Proton will be making before the end of the decade to ensure that Europe stays strong, independent, and technologically sovereign. Because of legal uncertainty around Swiss government proposals to introduce mass surveillance — proposals that have been outlawed in the EU — Proton is moving most of its physical infrastructure out of Switzerland. Lumo will be the first product to move.
>
> This shift represents an investment of over €100 million into the EU proper. While we do not give up the fight for privacy in Switzerland (and will continue to fight proposals that we believe will be extremely damaging to the Swiss economy), Proton is also embracing Europe and helping to develop a sovereign EuroStack for the future of our home continent. Lumo is European, and proudly so, and here to serve everybody who cares about privacy and security worldwide.
They actually don't explain it in the article. The author doesn't seem to understand why there is a claim of e2ee chat history but zero-access for chats. The point of zero-access is trust: you need to trust the provider to do it, because it's not cryptographically verifiable. Upstream there is no encryption, and zero-access means providing the service (usually on unencrypted data), then encrypting and discarding the plaintext.
Of course the model needs access to the context in plaintext, exactly like Proton has access to emails sent to non-PGP addresses. What they can do is encrypt the chat histories, because those don't need active processing, and encrypt on the fly the communication between the model (which needs plaintext access) and the client. The same is what happens with Scribe.
I personally can’t stand LLMs, I am waiting eagerly for this bubble to collapse, but this article is essentially a nothing burger.
Some powers are really strong (e.g., luck of the far realm). But what is truly strong is the transformation. Fly every turn? Yes please.