as an illustration of our player-centric approach
Or in other words: we lost loads of money when people didn’t flock to our exclusive platform like we wanted, and it seems they don’t like being squeezed for every penny they have.
You should not be struggling most of the time when using the CLI. Basic use is just as easy as any GUI. Learning the commands might be a bit more involved, and you need to be a bit more proactive about it. But anything you do 30+ times a day you should be well over the learning curve for, and able to execute just as quickly, if not quicker, than in a GUI. Especially once you factor in tab completion and reverse history search.
But what using the CLI more often does teach you is how to lessen that initial learning curve: you get quicker at finding the new commands you need and learning how they work, slowly building up your tool belt of knowledge about the commands you do look up.
I see nothing in this graphic that isn’t easy to do with a gui.
I didn’t say the GUI was not easy for the common stuff. But I think the CLI is also easy for the common stuff, so there is not much advantage beyond a bit of a learning curve with the CLI. The big thing GUIs make harder is automating common things. For instance, when I want to create a PR I like to rebase onto the latest upstream. In a GUI that is a bunch of button clicks. With the CLI I just type <CTRL+R>pus
and that will autocomplete to git pull --rebase=interactive --autostash && git push && gh pr create --web
and I am landed in a web browser ready to review and submit my PR. Doing the same thing in a GUI takes a lot longer with a lot more clicking.
And that is a very common command for me.
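That chain is also easy to save as an alias, so you don't even need the reverse search. A sketch, assuming the same GitHub CLI as above; the alias name "pr" is my own invention:

```shell
# Save the whole workflow as a git alias (the name "pr" is made up here;
# the command chain is the same one quoted above).
git config --global alias.pr \
  '!git pull --rebase=interactive --autostash && git push && gh pr create --web'
```

After that, git pr does the whole dance in one go.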
Like logging and diffing is just so much easier when I can just scroll and click as opposed to having to do a log command, scroll, then remember the hashes, and then write the command.
Never found that to be a big issue. Most of the time when you want a diff, you want to diff local changes or staged changes, which is simply git diff
and git diff --staged
Neither of those is hard, or any easier in the GUI (especially with bash history). For diffing specific commits I don’t find that hard either: just git log --oneline
and find the commits (and you can easily use grep to filter things out here as well) - typically no scrolling needed at all. Then git diff <copy paste>..<copy paste>
. In the GUIs you are often scrolling through the commits you want to select at some point, so I don’t see how that saves you any real time here. I would not say the CLI or GUI is vastly easier in this case. And even then it is a rare thing to need to do. Far more often it is just branches, which on a decent shell can be tab completed for convenience.
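For illustration, the whole log-then-diff flow sketched out (the grep pattern, the hashes, and the branch names are all placeholders):

```shell
# Sketch of the flow described above; the pattern, hashes and branch names
# are placeholders for whatever you are actually looking for.
git log --oneline --grep="login"   # or pipe through grep to filter the list
git diff abc1234..def5678          # paste in the two hashes you picked
git diff main..my-feature          # branches tab complete, no hashes needed
```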
And sometimes I watch beginners use the gui and I have to bite my tongue because I know it would be faster in the cli.
This is why I prefer the CLI for common stuff. It is just faster.
But, especially for a beginner, I strongly recommend a GUI.
And that is where I disagree. I think beginners should spend some time learning the tools they will need to use. IMO the CLI is critical for developers to learn, and the sooner the better. So many things are vastly easier with the CLI than with GUIs, and a lot of stuff is near impossible with GUIs. Automation is a big one. I have not seen a good CI system that is GUI focused where you never need to know the CLI. And when you have a repetitive task it is quicker to write a quick script and run that than to do the same thing over and over in a GUI. Repeating actions is also easier in the CLI. All of this applies to more than just git as well.
I have seen so many beginners start with GUIs that don’t really understand what they are doing in git. And quite often break things and then just delete and recreate the repo and manually make their changes again. I find people that never bother with the CLI always hit a ceiling quite quickly in terms of their ability and productivity.
The only real thing that makes the CLI worse is that it has a steeper learning curve. Once you are over that hill I find it to be vastly better in more situations, or at least not practically any worse than a GUI equivalent. So that hill is one well worth climbing.
I can always use a GUI if I really need to. But those that only know the GUI will have a very hard time on the CLI when they need it - which is required far more often than the other way round.
Not sure I would say that is a rebase failing - just you messing things up. That can happen with any merge. But yeah, that is a place where the reflog would be useful. Though I don’t see why it would be on the cheat sheet instead of git rebase --abort
or be rebase specific.
Typically because you have been leaning on the GUI for ages and don’t know the CLI well enough to do the easy stuff quickly, let alone the advanced stuff, or even be aware of what you can do with the CLI. And if you do know the CLI well enough, you tend to find it just faster to work with and easier to automate things.
GUIs tend to only cover the common/basic usage, which is easy to remember without a cheat sheet. When you need more advanced stuff, GUIs tend to become more of a sticking point, I find. And common workflows are far easier to automate with the CLI than with a GUI.
The only time I see a rebase fail is due to a conflict. Which can be aborted with git rebase --abort
no need for reflogs unless you really mess things up.
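To sketch both escape hatches (the reflog entry number is a placeholder; which entry you actually want depends on what you did):

```shell
# Mid-rebase and it has gone sideways: throw it away, back to where you started.
git rebase --abort

# Rebase already finished but the result is wrong: find the old tip in the
# reflog and hard reset to it. HEAD@{2} is a placeholder entry number.
git reflog
git reset --hard 'HEAD@{2}'
```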
The known unknowns and especially the unknown unknowns never get factored into an estimate. People only ever think about the happy path, where everything goes right. But that rarely ever happens, so estimates are always wildly off.
The book How Big Things Get Done describes a much better way to factor in everything without knowing all the unknowns: just look at previous similar projects and how long they took, take the average and the bounds, then adjust up or down if you have good reason to do so. Your project will very likely take a similar amount of time if your samples are similar in nature to your current task. The actual times already factor in all the issues and problems encountered, and even if you don’t hit all the same issues, your problems will likely take a similar amount of time. And the more previous examples you have, the better these estimates get.
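As a toy sketch of that method (the durations below are invented for illustration):

```shell
# Reference-class estimate: average and bounds of how long similar past
# projects actually took. The numbers here are made up.
echo "6 9 7 12 8" | tr ' ' '\n' | sort -n |
  awk '{a[NR]=$1; s+=$1} END {printf "estimate ~%.1f weeks, range %d-%d\n", s/NR, a[1], a[NR]}'
# prints: estimate ~8.4 weeks, range 6-12
```

Then adjust up or down from there only if you have a concrete reason to.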
But instead of that we just pluck numbers out of the air and wonder why we never hit them.
The social aspect of going into a dark room to watch a screen in silence? Vs talking and joking around on voice chat?
Just factor it into your estimates and make it a requirement of the work. Don’t talk to managers as though it is some optional bit of work that can be done in isolation. If you do frequent refactoring before you start a feature, it does not add a load of time, as it saves a bunch of time when adding the feature. And it helps keep your code base cleaner over the long term, leading to fewer times you need to do larger refactors.
People said the same thing about tweets when Twitter first came out.
They only need it to pass once, we need it to be rejected every single time.
And other browsers can be configured to do the same. Though that is not uBlock Origin doing anything with the cookies, and these settings can be enabled without uBlock (though you likely want uBlock if you are enabling them).
I don’t think it does anything with cookies directly. It just blocks connections to domains and removes elements from pages that match patterns you give it. Removing the cookies/privacy banners does just that - removes the banner. This SHOULD opt you out of tracking, as the laws generally require explicit permission, so not clicking the accept button should be enough. But whether sites actually follow those laws is a completely different matter.
Third party tracking cookies are normally blocked by their domain - when a tracking pixel is on the page it reaches out to a known tracking domain, which logs the visit and drops a cookie for that domain. By blocking that domain the tracking request is never made, so no cookie is dropped and there is nothing to track you with. Most tracking is done like this, so it is quite effective. But it won’t stop a first party cookie from being dropped, or tracking done through that or any other data you send.
Note that the laws don’t require permission for all cookies. Ones that are essential to the site’s function (like a cookie that carries login info) are typically allowed and cannot be opted out of (you can always delete cookies locally though; the laws just cover what sites can use). And not all sites will respect these laws, or they will try to skirt around them, so none of this is 100% perfect by any means.
How do we know these are the AI chatbot’s instructions and not just instructions it made up? They make things up all the time, so why do we trust it in this instance?
If a family member gets banned for cheating while playing your copy of a game, you will also be banned in that game.
This sucks.
Yeah, but I can see why, as it would be easy to abuse. You would only need one copy of the game and could cycle accounts that never owned the game out of the family sharing when they get banned.
Might be other ways to limit that, but would also likely need more restrictions on the feature that might be more annoying.
Oh, just invest in Adobe and get it developed for Linux - easy, why didn’t anyone think of this before. And better yet, if they do invest they could make it a PopOS exclusive!!?!?!! \s
It won’t work because Adobe does not care, and there is not enough market share on Linux for them to bother with it. No amount of money that PopOS has will convince Adobe to develop for Linux, and there is no way in hell Adobe will give them access to their source to develop it for Linux themselves. That whole argument is just a non-starter.
“We had relied and started to rely too much this year on self-checkout in our stores,” Vasos told investors. “We should be using self-checkout as a secondary checkout vehicle, not a primary.”
That is the key point here. Use them to replace the express lanes, but don’t replace all checkout points with them.
they actually increase labor costs thanks to employees who get taken away from their other duties to help customers deal with the confusing and error prone kiosks
Now that is bullshit… how can it cost more to have someone spend part of their time helping a customer with a problem vs having an extra person helping them full time during checkout?
Still, 60% of consumers said they prefer self-checkout as of 2021, presumably because they’ve never seen Terminator (wake up sheeple).
WTH… I really don’t understand why this person hates them so much. They seem to have some hidden agenda, but I cannot for the life of me tell what it is.
Additionally, any application using a GUI toolkit (like Qt or GTK) only needs to update to a version that has native Wayland support. Which means most applications already support it. At least if they don’t use any X11 APIs directly (which is not that common).
Plus this applies to your family as well. DNA is shared and by you giving it up you give up info about those related to you as well.