Oops. Good to know… I guess the main thing was simply that there was a BK in the right place relative to the 9792 km arc then.
This might be a stretch but McD can be franchised. If one franchisee pays top dollar for ad placement and other nearby franchises don’t, it would be profitable for them to send you to that franchisee even if it’s further.
…that being said I’m probably reading too much into it. Probably just your usual Google jank.
I’m not the person who found it originally, but I understand how they did it. We have three useful data points: you are 2.6 km from a Burger King in Italy, that BK is on a street called "Via ", and you are 9792 km from a Burger King in Malaysia.
It’s not perfect, but it works well! This is the principle behind how your GPS works. It’s called trilateration (often loosely called triangulation): locating a point from its distances to several known points. Here we only had distances to two points, and one of them doesn’t tell us the sub-kilometer distance. If we had distances to three points, we could find your EXACT location, within some error depending on how precise the distance information was.
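To see why the third distance matters, here's a minimal 2D sketch (all function names and coordinates are my own, purely for illustration): the first two distance circles intersect in at most two candidate points, and the third distance disambiguates between them.

```python
import math

def circle_intersections(p0, r0, p1, r1):
    """Return the intersection points of two circles (centers p0, p1; radii r0, r1)."""
    x0, y0 = p0
    x1, y1 = p1
    d = math.hypot(x1 - x0, y1 - y0)
    # No solution if the circles are too far apart, nested, or concentric.
    if d > r0 + r1 or d < abs(r0 - r1) or d == 0:
        return []
    # Distance from p0 to the line through the two intersection points.
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(max(r0**2 - a**2, 0.0))
    # Midpoint between the two intersections, then offset perpendicular to p0->p1.
    xm = x0 + a * (x1 - x0) / d
    ym = y0 + a * (y1 - y0) / d
    return [
        (xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d),
        (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d),
    ]

def trilaterate(anchors, distances):
    """Use distances to three known anchor points to pin down a single location."""
    candidates = circle_intersections(anchors[0], distances[0], anchors[1], distances[1])
    # The third distance picks whichever candidate it agrees with best.
    return min(
        candidates,
        key=lambda c: abs(math.hypot(c[0] - anchors[2][0], c[1] - anchors[2][1]) - distances[2]),
    )

# Example: the true location (3, 4), measured from three made-up anchors.
anchors = [(0, 0), (10, 0), (0, 10)]
distances = [5.0, math.sqrt(65), math.sqrt(45)]
print(trilaterate(anchors, distances))  # → (3.0, 4.0) up to floating-point error
```

With only the first two anchors you'd be stuck choosing between (3, 4) and (3, -4); real GPS does the 3D analogue of this with satellite ranges (plus a fourth satellite to solve for clock error).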
You say “only” 6 months ago but it’s surprising to me just how quickly this time has passed.
I was an everyday Reddit user pre-Lemmy. I happened to get linked to something there yesterday and saw all my subs’ “last visited” dates at 6 months. It’s crazy how easy it was to go cold turkey, and I haven’t seen a need to go back.
Copilot, yes. You can find some reasonable alternatives out there but I don’t know if I would use the word “great”.
GPT-4… not really. Unless you’ve got serious technical knowledge, serious hardware, and lots of time to experiment, you’re not going to find anything even remotely close to GPT-4. Probably the best the “average” person can do is run quantized Llama-2 on an M1 (or better) MacBook, making use of the unified memory. Lack of GPU VRAM makes running even the “basic” models a challenge. And, for the record, this will still perform substantially worse than GPT-4.
If you’re willing to pony up, you can get some hardware on the usual cloud providers but it will not be cheap and it will still require some serious effort since you’re basically going to have to fine-tune your own LLM to get anywhere in the same ballpark as GPT-4.
Image generation tech has gone crazy over the past year and a half or so. At the speed it’s improving I wouldn’t rule out the possibility.
Here’s a paper from this year discussing text generation within images (it’s very possible these methods aren’t SOTA anymore – that’s how fast this field is moving): https://openaccess.thecvf.com/content/WACV2023/html/Rodriguez_OCR-VQGAN_Taming_Text-Within-Image_Generation_WACV_2023_paper.html
Although it’s a little less convenient than just remembering across refreshes, you can set a default sort in your profile if there’s a particular sort you like.
That being said, I find myself hopping between Top Day and Hot, so I can only start on one or the other. It would be nice if it just remembered.
Instance friendly link: !programming_horror@programming.dev