dot files

Posted on 2019-03-28 14:47 in Blog • Tagged with linux, osx

What is the difference between .bashrc and .bash_profile?

.bash_profile is executed for login shells, while .bashrc is executed for interactive non-login shells.

So, what's the difference between a login shell and an interactive non-login shell?

On most *nix systems, .bash_profile is executed when you log in to a terminal session. This happens when you log in at the machine itself or when you log in via SSH. Once logged in, new terminal windows you open are launched as "interactive non-login" shells, and .bashrc is executed.

A notable exception: OSX launches all new terminal instances as login shells.

Why different files for login vs non-login?

Perhaps you want to see some additional information when you first log into a box, in addition to (or instead of) the default "Last login" details: the current system load, the next four hours' worth of calendar events, or custom ASCII art welcoming you to your machine.

What type of things do I define in my .bash* files?

Here are just a few of the components of my .bash* files:

  • PATH modifications
  • alias commands
  • Convenience Bash functions
  • Git branch name in the prompt, with coloring
  • Common macros for work, e.g. cd into the application's log directory and begin tailing the most recent log (sketched below)
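
A minimal sketch of the last two items (the alias, log path, and function name are hypothetical, not lifted from my actual config):

# Convenience alias: long listing with human-readable sizes.
alias ll='ls -lah'

# Work macro: jump to the application's log directory (hypothetical path)
# and tail the most recently modified log file.
taillog() {
    cd /var/log/myapp || return
    tail -f "$(ls -t *.log | head -n 1)"
}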

Additionally, I have heard of people adding checks to their profile that look for package manager updates to their tool chain each time they log in and print an alert letting them know updates are available, along the lines of the sketch below.
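
A minimal sketch of such a check, assuming Homebrew on OSX (swap in apt, yum, etc. for other platforms):

# In .bash_profile: warn about outdated Homebrew packages at login.
if command -v brew > /dev/null 2>&1; then
    outdated="$(brew outdated)"
    if [ -n "$outdated" ]; then
        echo "Homebrew packages with available updates:"
        echo "$outdated"
    fi
fi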

What if I have multiple machines?

What if you have multiple machines and you would like consistent terminal behavior between them? Simple: use the same configuration files on all of the machines. Create a new Git repo and store your shared config files there. Then, for each machine you use, update that machine's .bash* file to source the shared config files.

Your config file could also automatically pull updates from your repo so that each machine always stays up to date with your most recent configuration, as in the sketch below.
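
A minimal sketch of that stanza for each machine's .bashrc (the ~/dotfiles location and file name are hypothetical):

# Shared config lives in a Git repo cloned to ~/dotfiles (hypothetical location).
DOTFILES="$HOME/dotfiles"

# Quietly pull the latest config in the background so login isn't blocked;
# updates take effect the next time a shell starts.
if [ -d "$DOTFILES/.git" ]; then
    git -C "$DOTFILES" pull --quiet --ff-only > /dev/null 2>&1 &
fi

# Source the shared settings if they are present.
[ -f "$DOTFILES/shared.bashrc" ] && source "$DOTFILES/shared.bashrc"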

Interesting Bits from my bashrc file

The following snippet gives me a customized Bash prompt. It displays the username and hostname, followed by the current directory in purple and the Git branch I have checked out (when in a Git repository) in red.

# Print the current Git branch, e.g. " (master)", or nothing when outside a repository.
parse_git_branch() {
    git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
}

# \u@\h is user@host and \w is the working directory; the \[ \] wrappers mark the
# color escape sequences as non-printing so Bash calculates the prompt width correctly.
export PS1='\u@\h \[\e[35m\]\w \[\e[31m\]$(parse_git_branch)\[\e[0m\]$ '
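
Inside a repository, the prompt then renders something like user@host ~/projects/myapp (master)$ (the path here is just an illustration), with the working directory in purple and the branch name in red.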


Screen: A Linux Application Every Developer Should Know About

Posted on 2018-12-31 13:35 in Blog • Tagged with developer, linux

In a previous life, or at least it feels like a previous life, I found myself performing a fair amount of remote load testing. I would SSH into a box, start the application under test, and then need to start any number of monitoring and metric-gathering scripts. Periodically, as the test progressed, I would return to check on the status of those scripts to determine in real time whether there was an issue with the environment.

The trouble I faced: if I left the command running in my terminal and the SSH session was lost, the script would be killed. If I launched the process in the background, then I ran into the problem of identifying the name of the auto-generated log file that I needed to tail to check on its status. Enter screen.

Screen is a Linux command-line tool that stands up a virtual terminal with all kinds of wonderful features. As the man page describes:

Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells). Each virtual terminal provides the functions of a DEC VT100 terminal and, in addition, several control functions from the ISO 6429 (ECMA 48, ANSI X3.64) and ISO 2022 standards (e.g. insert/delete line and support for multiple character sets). There is a scrollback history buffer for each virtual terminal and a copy-and-paste mechanism that allows moving text regions between windows.

At its heart, it allows the developer to SSH into a machine and start a long-lived terminal session. Should the SSH session fail, you simply re-establish the connection to the server and reattach to the running screen session. Furthermore, a single terminal session can spawn several screen sessions, allowing several scripts that you want to monitor periodically to run concurrently.

Here is a breakdown of my most often used commands.

Command             Description
screen              Launch an unnamed screen session.
screen -list        Display a list of currently running screen sessions.
screen -S <name>    Launch a named screen session.
screen -r           Reattach to a detached screen session (only works if there is exactly one).
screen -r <name>    Reattach to a detached screen session by name.

Once inside a screen session, there are a number of useful commands. However, in order to enter command mode you must first press Ctrl+a; otherwise the keystrokes are passed to the terminal running within the screen session.

Command             Description
Ctrl+a, then d      Detach, leaving the session running.
exit                Kill the running screen session.
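
Putting the pieces together, a typical remote load-testing workflow might look like this (the session and script names are hypothetical):

screen -S loadtest          # start a named session on the remote box
./collect_metrics.sh        # kick off the long-running monitoring script
# press Ctrl+a, then d to detach; the script keeps running even if the SSH session drops

screen -list                # later, after reconnecting: confirm the session is still there
screen -r loadtest          # reattach and check on progress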


A Day in the Life of a Product Owner

Posted on 2018-07-13 13:51 in Blog • Tagged with work, product owner

The day begins like any other. Wander into the office around 9am. Greet a few coworkers before settling in at my desk. Open up Gmail to check what fires need stamping out and take a tour through the calendar to learn about the day’s meetings. Eight meetings today. More than I like to see, but nothing terrible. I update my to-do list with any critical information I think I need to gather prior to the important meetings, and then grab my mug and make my way to the cafeteria.

I grab a cheese stick from the cooler and get in line at the coffee machine, making small talk and catching up as I wait for my mocaccino. I love the fancy coffee machines that work has. With caffeine in hand, I walk to my first stand-up. The Product Security team is small, with only three developers. They are embedded in the other Scrum teams to provide their expertise when necessary. The meeting passes without incident, so I make my way to my second stand-up, the engine team.

The engine team is larger, with 8 dedicated developers and 3 quality assurance people. I learn about unexpected complexity for one of the important projects and that a key developer will be out next week, meaning I’ll need to re-examine story alignment and delivery time frames. I make a note of both.

Off to my third stand-up. Thankfully this is the once-a-week meeting among all of us product owners, where we keep one another abreast of our projects’ status, mostly to track interdependencies and share knowledge. Next week’s release is being pushed a week to make space for a critical patch to an older version. Good news for me: extra buffer room, so the testers will be less rushed.

After a brief half hour at my desk to catch up on emails, I head off to a story-authoring meeting. We met last week to draw up the story map, detailing the various ways the project personas would interact with the feature. Since then, I’d broken the map into pieces and wrapped stories around the different portions of functionality. Presenting to the group, we reviewed the requirements and scope assigned to each story. Some pass muster, are sized, and are added to the backlog. Others are deemed too vague or too large in scope, or are still waiting on final UI mocks, and are left in the authoring state. Overall, a good, productive meeting.

Lunch: Indian takeout, can’t be beat.

After lunch I run an uneventful chartering session for the team's next medium-sized project. There is a little contention over the extent of the build-out, but eventually everyone agrees on the phase 1 work and what will be put off until the next phase.

This is followed by two back-to-back one-on-ones with my product managers, who are both unhappy that we are “behind” schedule. Once again I walk through how the developers sized the work on each story and the amount of calendar time required to build it. If they want it sooner, what part of the feature do they want cut… Oh, you still want everything? …

With my mid-afternoon coffee, I settle in at my desk to pound out the story deltas from the authoring meeting this morning, clean up the chartering document, and chat with the user experience people over instant messenger to get a rough idea of when their work will be done.

I review my updated to-do list from the day’s meetings, schedule a follow-up authoring meeting for this morning’s session and a new story-mapping meeting for the charter I ran, and turn my attention to reviewing my team’s project backlog. One project has been fully passed on to testing, three are making good progress, and I update one from “backlog idea” to “authoring” to mark the transition into the authoring state.

Last meeting of the day: the QA hand-off. We start the meeting with me re-presenting the story, describing the need for the feature that was built, my criteria for validating that the feature works as expected, and the environments it needs to work in. The developer takes over, describes the new settings and how to access the feature, and gives a short five-minute demo. They then cover a few edge cases they ran into that they’d like QA to take a closer look at. A successful hand-off, and a great end to a busy day.

I stop by my boss’s desk to chat for a few minutes and reflect on the day’s work. I then grab my backpack and head home. Glad to have the day’s work behind me, happy to know everything is progressing smoothly. At least as smoothly as one could expect a software project to go. I think to myself, being a Product Owner isn't so bad.



Thread Safety

Posted on 2018-04-27 17:18 in Blog • Tagged with concurrency, java

While digging through the code to get an answer to a question I needed to proceed with a story-authoring activity, I stumbled upon the following code. Most of the class is omitted and the names of the routines and structures are simplified.

...

public void insertIntoMap(long key, Object value) {
    synchronized (readWriteLock) {
        this.map.put(key, value);
    }
}

...

public Map getMap() {
    synchronized (readWriteLock) {
        return this.map;
    }
}

...

The part that drew my attention was the getMap() method: wait to acquire the lock, then return a reference to the object. The caller of the method, when it accesses the object, is no longer protected by the readWriteLock. Therefore, we have the makings of a ConcurrentModificationException on our hands (see the sketch below). There are two primary ways to resolve this issue.
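
To make the failure concrete, here is a minimal, self-contained sketch (the class, thread, and value names are mine, not from the original code): one thread keeps inserting under the lock while the caller iterates over the escaped reference with no lock held, which sooner or later throws ConcurrentModificationException.

import java.util.HashMap;
import java.util.Map;

public class EscapedReferenceDemo {
    private final Object readWriteLock = new Object();
    private final Map<Long, Object> map = new HashMap<>();

    public void insertIntoMap(long key, Object value) {
        synchronized (readWriteLock) {
            map.put(key, value);
        }
    }

    public Map<Long, Object> getMap() {
        synchronized (readWriteLock) {
            return map; // the reference escapes the lock here
        }
    }

    public static void main(String[] args) {
        EscapedReferenceDemo demo = new EscapedReferenceDemo();

        // Writer: mutates the map under the lock, exactly as insertIntoMap intends.
        Thread writer = new Thread(() -> {
            for (long i = 0; ; i++) {
                demo.insertIntoMap(i, "value-" + i);
            }
        });
        writer.setDaemon(true);
        writer.start();

        // Reader: iterates the escaped reference without holding the lock.
        while (true) {
            for (Object ignored : demo.getMap().values()) {
                // walking the entries is enough to trigger the exception
            }
        }
    }
}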

Follow the copy-on-access pattern:

public Map<Long, Object> getMap() {
    synchronized (readWriteLock) {
        // Hand back a defensive copy; callers read the copy freely, while the
        // shared map is only ever touched with the lock held.
        return new HashMap<>(this.map);
    }
}

Eliminate the need for the read/write lock:

private final Map<Long, Object> map = new ConcurrentHashMap<>();

...

public void insertIntoMap(long key, Object value) {
    this.map.put(key, value);
}

...

public Map<Long, Object> getMap() {
    return this.map;
}

I'm a fan of the second approach. Using a data structure with built-in thread safety reduces the likelihood that a developer will forget to take a lock somewhere and introduce a concurrency problem.



Meandering Our Way Through the Microservice Architecture: Case Study

Posted on 2018-04-16 22:56 in Blog • Tagged with microservice

The Challenge

The CEO got up in front of the entire company and said, “Product XYZ is the future of this company, and we are going to ship V1 by the end of the year.” As the designs began to firm up, it became clear that an authentication microservice was needed. The challenge was made.

“IcedPenguin, you shall lead the team that builds the Security Token Service (STS).”

Project Kick-Off

I chatted with the architecture team, the implementation team, and the product owner of the overall application and established the STS charter. We had three months to deliver a beta service (needed to start service integrations) and an additional month before the service had to be hardened. This will be fun.

The Purpose of the STS

The primary responsibility of the STS was to handle authentication for our cloud product, which handled all web traffic to the console. We replaced the existing weak cookie implementation with a JWT implementation and rev’d the core application to understand the structure of the JWT, so it could perform authorization based only on the contents of the JWT instead of having to reach back to a centralized point (the power of digital signatures). Additionally, we designed and built a revocation mechanism so that the web servers could be notified when a token had been revoked (as opposed to simply expiring).
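
The post doesn't include the validation code, but the "no reach-back" idea looks roughly like this sketch, assuming an RSA-signed token and the jjwt library (the class name and key handling are hypothetical):

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jws;
import io.jsonwebtoken.Jwts;
import java.security.PublicKey;

public class StsTokenValidator {
    // The STS public key is distributed to every web server ahead of time,
    // so validation never requires a call back to the STS.
    private final PublicKey stsPublicKey;

    public StsTokenValidator(PublicKey stsPublicKey) {
        this.stsPublicKey = stsPublicKey;
    }

    public Claims validate(String token) {
        // Verifies the signature and expiry locally; throws JwtException on failure.
        Jws<Claims> jws = Jwts.parserBuilder()
                .setSigningKey(stsPublicKey)
                .build()
                .parseClaimsJws(token);
        return jws.getBody();
    }
}

The returned claims can then drive authorization decisions directly on the web server, which is the "no reach-back to a centralized point" property described above.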

The second main responsibility, and the primary need for Product XYZ, was to facilitate federated authentication for our endpoints to a third-party cloud service. The endpoint would reach out to the STS to kick off the authentication process. The STS would perform the handshake with the third party and return to the endpoint the third-party access token generated specifically for it.


Infrastructure

The core service was built on Spring Boot, exposing stateless REST APIs. The ability to scale horizontally was a requirement from day one: we would need it to scale, to ensure high availability, and to provide a seamless upgrade mechanism (bring up new nodes, get them stable, take down old nodes).
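
The post doesn't describe the API surface, but a stateless Spring Boot endpoint looks roughly like this sketch (the /sts/token path and payload shape are hypothetical):

import java.util.Map;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/sts")
public class TokenController {

    // Hypothetical endpoint. No session state lives on the node: everything the
    // request needs arrives in the request itself, so any instance behind the
    // load balancer can answer, which is what makes rolling upgrades
    // (bring up new nodes, drain old ones) straightforward.
    @PostMapping("/token")
    public Map<String, String> issueToken(@RequestBody Map<String, String> credentials) {
        String jwt = "...";  // authenticate the credentials and mint a signed JWT here
        return Map.of("access_token", jwt);
    }
}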


Result

After four months of effort (plus testing), we accomplished our goal. We extracted the authentication code from our monolith and shipped it as a microservice. Additionally, we added the third-party federated authentication functionality that was needed for Product XYZ.

Microservice Discussion: I Thought Microservices Were Supposed to Be Lightweight, So Why Did It Take Several Months to Build?

Great question! The STS was the first microservice to be built by the company, so we were in uncharted waters. Along the way, we had a lot of “overhead” tasks that had to be tackled:

  • Containerize the Spring Boot instance and its dependencies for development
  • Build a pipeline to auto-create the container
    • Run the unit tests
    • Publish artifacts to the artifact repository
    • Integrate automatic test runs with our pull-request review tool
  • Build a pipeline to deploy the node swarm to QA, staging, and production
  • Update the automated test framework to run against containers
  • Manage configuration settings for development, QA, and production
  • Run reviews with the corporate security department to approve the approach and architecture of the STS swarm
  • Put preliminary monitoring tools in place

In addition to the product integrations:

  • Put a complete audit trail in place for user authentication actions
  • Update the clients to interact with the STS

