Pre Omnis Studio 8.x on OSX soon to be obsolete.
On Mon, Feb 19, 2018 at 5:59 AM, Phil (OmnisList) via omnisdev-en < firstname.lastname@example.org> wrote:
> Hi Clifford,
> So, these are just shared folders on a server accessed by SSH?
> Access is totally controlled by the file ‘authorised_users’.
> I guess you’d need admin access to get to that file.
You do not need admin access to get to that file. The server in question
does not even have to be one on which you have root access. You just need
an account. Let’s say that you have an account, ppotter, on example.com and
that you or someone else has configured remote key-based authentication for
you. To get a shell on that machine, you’ll need to unlock your private key
on your local machine. I use an SSH agent to do that on Linux so I just
type “ssh-add” and type my very long and impossible-to-guess passphrase to
unlock my private key. That works on macOS, too. On Windows, I use Pageant,
which is part of the (free) PuTTY suite. Once you have your private key
unlocked, you can now do “ssh ppotter@example.com” and you will get a shell
on your remote server. “Remote” could be next to your knee or on the other
side of the world. It’s somewhere across the network. In this case, let’s
pretend that “example.com” is somewhere on the Internet because you’ll want
your team members to be able to access the same server.
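If you don’t already have a key pair, creating one and loading it into the
agent looks roughly like this (the file names are just the ssh-keygen
defaults; the comment string is arbitrary):

  ssh-keygen -t rsa -b 4096 -C "ppotter laptop key"
  eval "$(ssh-agent -s)"        # start an agent if your desktop hasn't already
  ssh-add ~/.ssh/id_rsa         # prompts once for your passphrase
  ssh ppotter@example.com       # connects without re-typing the passphrase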
Let’s say your git repo is at /home/ppotter/git-repos/some-repo.git and you
want to grant access to me to be able to clone, push, pull, etc. to that
repo because we are collaborators on this project. I send you my public
key, cilkay_rsa.pub. There is no magic in the name. It can be called
anything. You can upload that key to the server in any number of ways.
Let’s say we’re going to use SCP so “scp /path/to/cilkay_rsa.pub
ppotter@example.com:” will put my public key in /home/ppotter/.
Get a remote shell on example.com if you don’t already have one by doing
“ssh ppotter@example.com”. Note: if you are using Linux or macOS and your
local user account is “ppotter”, you don’t have to specify the “ppotter@”
part. Doing “ssh example.com” will work. Assuming you’re at the root of
your home directory, now add my public key to authorized_keys by doing “cat
cilkay_rsa.pub >> .ssh/authorized_keys”. I now have access to the git repo
so I can do “git clone ppotter@example.com:git-repos/some-repo.git” and
have a local clone of that repo. From here on, it’s no different than using
any other git repo. I can branch, commit, push, pull, merge, etc.
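To recap, the whole exchange is only a handful of commands, using the same
account, host, and paths as above:

  # on my machine
  scp /path/to/cilkay_rsa.pub ppotter@example.com:
  # on the server, logged in as ppotter, at the top of the home directory
  cat cilkay_rsa.pub >> .ssh/authorized_keys
  # back on my machine
  git clone ppotter@example.com:git-repos/some-repo.git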
The way this is configured, I have full shell access to your account on
that server. That may not be desirable so you can restrict the things that
someone using my key, hopefully only me, can do. I have keys that can do
“rsync” but nothing else. I have keys that can use “git” and nothing else.
I also have keys that have full shell access.
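The usual way to get that kind of per-key restriction is the command= option
in authorized_keys. A rough sketch, with the key material abbreviated and
each entry on a single long line (the rrsync helper ships with rsync; its
install path varies by distro):

  # rsync-only key: rrsync confines transfers to one directory
  command="/usr/bin/rrsync /home/ppotter/backups",no-pty,no-port-forwarding ssh-rsa AAAA... cilkay-rsync
  # git-only key: git-shell refuses anything that isn't a git transport command
  command="git-shell -c \"$SSH_ORIGINAL_COMMAND\"",no-pty,no-port-forwarding ssh-rsa AAAA... cilkay-git

A key with no command= option, like the one added earlier, keeps full shell
access.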
If you have root access on example.com, “ppotter” might have more
privileges than would be prudent to share. It might, for example, be in the
sudoers group so unless you really trust everyone who can get a shell as
ppotter, you should not be sharing this account. It’s not just about
trusting people to not do something malicious. It’s also about trusting
them not to inadvertently break your server. It’s best to create a new
account that is not in the sudoers group for the git repos, say “git”,
do what I outlined above, and still restrict shell access unless people
actually need it. There are a number of ways to restrict shell access and
the way you choose depends on what you’re trying to accomplish. Maybe you
want people to be able to “git clone” but not “git push”. Maybe you want
them to be able to do anything with git, rsync, and psql but nothing else.
Maybe you have a key you use strictly for invoking remote PostgreSQL dumps
(I have one) so you’d need to configure that key to be able to use pg_dump,
tar, and rsync.
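A rough sketch of that dedicated account on a Debian-flavoured box (the
account name, paths, and the use of git-shell are just illustrations, not a
recipe):

  # as root on example.com
  adduser --disabled-password --gecos "" git
  mkdir -p ~git/.ssh
  cat /home/ppotter/cilkay_rsa.pub >> ~git/.ssh/authorized_keys
  chown -R git:git ~git/.ssh
  chmod 700 ~git/.ssh && chmod 600 ~git/.ssh/authorized_keys
  # optional: block interactive logins while still allowing git over SSH
  chsh -s "$(command -v git-shell)" git   # git-shell may need an entry in /etc/shells

Collaborators then clone from git@example.com rather than from your personal
account, and because that account is not in sudoers, the worst they can break
is the repos.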
I have a few things I do on servers that lock them down. I disable root
logins over SSH. I allow only key-based authentication over SSH. I do not
allow password-based authentication at all. The only way to get root is for
a lower-privileged account to get a shell via key-based authentication and
for that account to be in the sudoers file. Scripted attacks are probing
for well-known accounts, like root, and then attempting to guess passwords.
That will never work on my servers: not only is root never allowed to log
in directly, but password authentication is disabled, so the attempt will
fail. Even if the script can guess the name of an account on the server, it
still can never guess the password because password authentication is not
allowed. For example, for an attacker to get a root shell on your
example.com server, they would need to know which of the accounts on that
server is in the sudoers file. Let’s say they manage to guess that
“ppotter” is an account on that server. They attempt to do “ssh
ppotter@example.com” and SSH will return a prompt for them to enter the
passphrase for your private key. For the attacker to gain access, they must
have two things, something only you have (hopefully), your private key, and
something you know, your passphrase. Unless they manage to steal the device
containing your private key AND manage to guess the passphrase or use
some means of cracking RSA encryption, which is likely out of the reach of
non-state actors, there is no chance they will get a remote shell on your
machine. At best, they will get “ppotter@example.com: Permission denied (publickey).” and if
your server is configured to either slow down or block failed
authentication attempts, the attacker will move on to easier targets. If
your private key is compromised by you inadvertently leaking it or someone
stealing a device that contains it, you can revoke access to the servers on
which you have the matching public key by removing that public key from
~/.ssh/authorized_keys and replacing it with a new key. If you had a lame
or no passphrase on your private key, I’d consider every server on which
you had your compromised key to be compromised. The prudent thing to do
would be to burn all the servers down and start over.
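For reference, the SSH lockdown described above mostly comes down to a few
directives in /etc/ssh/sshd_config, followed by a reload of sshd:

  # no direct root logins over SSH
  PermitRootLogin no
  # keys only; password guessing is pointless
  PasswordAuthentication no
  ChallengeResponseAuthentication no
  PubkeyAuthentication yes
  # optional: whitelist the only accounts allowed to log in at all
  AllowUsers ppotter git

The slowing-down or blocking of repeated failures is a separate piece, for
example a tool such as fail2ban or firewall rate limiting.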
Burning down and rebuilding isn’t as difficult as it sounds if you have
good practices. If you are still building servers by pointing and clicking
and doing system administration in an ad hoc manner, your infrastructure is
not maintainable or repeatable. This is where the concept of infrastructure
as code, which in turn is kept under revision control, comes in. I have
legacy servers that I had spun up doing it the old-fashioned way – copying
a disk image and configuring in an ad hoc manner. Those servers are not
easy to repeat. Sure, I can take a disk snapshot periodically, and I do,
but that isn’t a substitute for having the state of my infrastructure
defined in code. That’s where tools like Packer <www.packer.io/>
and SaltStack <saltstack.com/> come in. I can take a base image,
like a Debian Linux minimal installation, create my own customized base
image using Packer that has all the things that are common to all Linux
servers I will deploy, and then have SaltStack configure the Linux servers
that were created using Packer. I recently learned about Salt proxy
minions, which enable devices for which there aren’t Salt agents to be
managed by SaltStack. An example would be a Cisco switch. The objective of
tools like SaltStack is to be able to spin up data centers with as little
human intervention as possible. This makes it feasible to have repeatable,