From Shakespeare to Java

Life Begins Where Your Comfort Zone Ends.

Simplifying Dynamic UIs With React Router

Imagine building a React application where the displayed content depends on data coming back from an API call. When one of the navigation tabs at the top of the page is clicked, a specific component is rendered as the page’s content. This seems like an easy problem to solve, right?

For the sake of this example, let’s say our React app is going to display content about buildings. There are many different types of buildings - homes, shops, skyscrapers, monuments, museums, etc. - and obviously, the content for each of them is going to be different. If the API response returns a property like buildingType, we could probably create a switch statement to return the correct component dynamically:

getComponent(buildingType) {
  switch(buildingType) {
    case "museum": return Museum;
    case "skyscraper": return Skyscraper;
    ...
    default: return DefaultBuilding;
  }
}

render() {
  const BuildingComponent = this.getComponent(this.state.buildingType);
  return <BuildingComponent />;
}

This solution is okay if we’re creating a really simple React application. But for a more complicated UI, this isn’t going to cut it. Maybe we need to set props on Museum and a totally different set of props on Skyscraper. Now what? Maybe you could convert each case into an object like so:

case "museum": return {
  comp: Museum,
  props: {
    this.state.building.museumType,
    this.state.building.museumRanking
  }
}

...

render() {
  const building = this.getComponent(this.state.buildingType);
  const BuildingComponent = building.comp;
  const buildingProps = building.props;
  return <BuildingComponent {...buildingProps} />;
}

Now just imagine having to do this for 10 types of buildings. That switch statement would become hundreds of lines in mere seconds! This may be the simplest solution, but it’s certainly ugly!

Enter React Router. React Router shows and hides content based on, you guessed it, routes. Usually, React Router is used for single page applications. You can show routes like /new, /edit, or /delete without actually having the entire application refresh. Instead, it simply shows the New component when the /new route is triggered.

Using React Router, we can simplify our code by using easy-to-understand routes to dictate which component and props we need.

First, we need a Router. There are many to choose from. BrowserRouter is what you’d expect - it updates the browser’s URL bar with a new route. It looks as if you’ve navigated to a new page, even though the app does not actually reload from scratch. If you’re building a React UI inside of another framework that already has routes, you obviously don’t want to update the browser’s URL bar. In that case, pick the MemoryRouter. Inside the Router, declare your routes like so:

render() {
  return (
    <MemoryRouter>
      <Route exact path="/house" component={House} />
      <Route exact path="/museum" render = { (routeProps) => <Museum {...routeProps} {...this.getMuseumProps()} />} />
    </MemoryRouter>
  )
}

I’ve provided two examples of how to declare a component in a Route. The first way is pretty straightforward. The House component does not need any props passed to it, so we can just use the component prop in Route and pass it the component’s name. The route for /museum is a little different. In this case, we use the render prop. You always need to pass down information about the current route state - that’s what routeProps is. But then you can append any other props that component might require. In this case, you can make small, manageable functions to return the props needed for specific components.
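
In case it helps, here’s a rough idea of what a helper like getMuseumProps could look like. The function isn’t shown in the original code, so treat this as a hypothetical sketch built from the state fields used earlier:

// Hypothetical helper on the same component: gathers only the props Museum needs.
getMuseumProps() {
  return {
    museumType: this.state.building.museumType,
    museumRanking: this.state.building.museumRanking
  };
}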

But wait. How do we actually switch routes? Link is the obvious component to use - it comes built into React Router, and you can look at their documentation for more information on it. In this example, though, we have some navigation tabs that call an onClick method defined in the same component as our Router. How do we get the onClick method to change the route?

Step 1: Update the export default line:

import { withRouter } from 'react-router-dom';

class MainBuilding extends React.Component {
  ...
}
export default withRouter(MainBuilding);

Step 2: Change the route in the onClick method:

class MainBuilding extends React.Component {
  ...
  onTabClick(newBuildingType) {
    const newRoutePath = `/${newBuildingType}`;
    this.props.history.push(newRoutePath);
    this.setState({buildingType: newBuildingType});
  }
}

When we push the new route to history and update our state, our render() method is called and matches the new route. So, if the newBuildingType is “house”, the component matching the /house route will be displayed. Yes, you might have a lot of routes. But you can refactor this into more manageable chunks, unlike a switch statement.
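
For example, here’s one way those routes could be generated from a small configuration list instead of being written out one by one. This is just a sketch - the buildingRoutes array and its shape are my own invention, and it assumes the same components and getMuseumProps helper from above:

render() {
  // Hypothetical route configuration: one entry per building type (add the rest as needed).
  const buildingRoutes = [
    { path: "/house", component: House },
    { path: "/museum", component: Museum, getProps: () => this.getMuseumProps() }
  ];

  return (
    <MemoryRouter>
      <div>
        {buildingRoutes.map(({ path, component: Building, getProps }) => (
          <Route
            key={path}
            exact
            path={path}
            render={(routeProps) => <Building {...routeProps} {...(getProps ? getProps() : {})} />}
          />
        ))}
      </div>
    </MemoryRouter>
  );
}

Each new building type then becomes one line in the configuration instead of another case in a switch statement.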

I hope this example helps you think about how to incorporate React Router into your app to better organize serving up specific components.

I Speak Developer

I remember the days before I was a software engineer and I lived in a world where Python was a snake and not a programming language, where Java was just another word for coffee, and Cassandra was a person from Greek mythology. Most people live in my former world. If you talk to non-technical people about what you do - I mean, what you REALLY DO, no holds barred, describing all the tools and languages and processes you use every day - they will think you are speaking Klingon.

One of my coworkers set up a meeting with me next week because they wanted to better understand git branching and how it affects QA and production deployments. It took me a few months on the job after graduation to truly understand git. Local vs. remote, pushing, pulling, merging, branching, commits - it’s confusing to most beginners. So, I feel like I really need to prepare for this meeting and think about how to explain aspects of git to someone on the outside. I thought that coming from a teaching background would help me describe my daily challenges to my non-technical coworkers. Now I realize I am entrenched in tech-speak. I’ve become the very person I didn’t understand before starting my computer science degree.

How to Create Custom CSVs From jQuery Datatables

It’s official! I’ve been a software engineer who has been gainfully employed for more than 1 year. And so, to celebrate, this post will be a technical one! Where I work, there’s a scenario that pops up quite often:

We’d like to be able to export the data in a table located on x page of our application as a CSV file.

It sounds simple enough. We use jQuery Datatables to generate most of the tables in our application. And of course, Datatables actually provides the ability to save the table as a CSV or another file type. But unfortunately, there are some problems with using Datatables’ built-in CSV export:

  1. Pagination: If you dynamically fill each page with data by making new ajax calls, the CSV export will only include the active page, not all of the data.
  2. You might want to include some data that’s not displayed in the datatable, such as hidden values, or those used as part of a link for a row.

This is when you need to generate a custom CSV from your datatable. Here’s the process:

  • Create an export button near your datatable. Simple enough.
  • Next, you need to get the raw data. There are 2 ways to do this, depending on your situation. If there’s no pagination and all of your data is contained in the datatable, you can simply do:
var dt = $("#idOfMyDatatable").DataTable();
var data = dt.rows().data();

If you’re using pagination and need to retrieve all of the data, you can make an ajax request to your backend, probably to the same endpoint you use to populate your datatable in the first place, and ask it for everything by setting the iDisplayLength value to the recordsDisplay value. That’s the total number of rows in the table (the 100 in ‘Viewing 1-10 of 100’):

var dt = $("#idOfMyDatatable").DataTable();
dt.on('init.dt', function(e) {
  var params = table.ajax.params();
  var iDisplayLength = dt.page.info().recordsDisplay;
  $.ajax({
    url: "/getDTJson/?iColumns=" + params.iColumns + "&iDisplayStart=0&iDisplayLength=" + iDisplayLength + "&sSearch=" + params.sSearch,
    type: 'GET',
    success: function(data) {
      //This is our next step - working with the data. For now, you've retrieved it!
    }
    });
});
  • So, once we have our data, we can begin to craft our CSV file. We’ll need to set up the basics, like the headers and the file information, and create an array where each value is a line in the CSV.
var headings = ["Heading1", "Heading2", "Heading3"];
var lineArray = [];
lineArray.push("data:application/csv;charset=utf-8," + headings);
  • Adding your actual data depends on how you retrieved your data in step 2.

    • If you’re getting your data from the datatable itself, i.e. var data = dt.rows().data();, your data will look a little something like
    {"0": ["value1", "value2", "value3", "value4"], "1":["value1", "value2", "value3", "value4"]}
    

    You will need to loop through each array in the data object, pulling out only the values you want and storing those in a new array. If those values could possibly contain a comma, make sure you escape them (the code below wraps them in double quotes). Finally, join all values in your array into a string using a comma separator and store that string in your line array:

    for(var i = 0; i < data.length; i++) {
      var lineWithSelectedValues = [];
      lineWithSelectedValues.push("\"" + data[i][0] + "\"");
      lineWithSelectedValues.push(data[i][2]);
      lineWithSelectedValues.push("\"" + data[i][3] + "\"");
      var csvLine = lineWithSelectedValues.join(",");
      lineArray.push(csvLine);
    }
    
    • If you had to make a new ajax call to retrieve your data, it will be in a two-dimensional array under the key aaData, looking something like {"aaData": [["value1", "value2", "value3", "value4"],["value1", "value2", "value3", "value4"]]} You will need to loop through each nested array and extract only the values you want. Usually, I do this via a separate function, but for ease, I’ll combine it into my code sample:
      for (var i = 0; i < data.aaData.length; i++) {
        var dataArray = data.aaData[i];
        var lineWithSelectedValues = [];
        lineWithSelectedValues.push("\"" + dataArray[0] + "\"");
        lineWithSelectedValues.push(dataArray[2]);
        lineWithSelectedValues.push("\"" + dataArray[3] + "\"");
        var csvLine = lineWithSelectedValues.join(",");
        lineArray.push(csvLine);
      }
    
  • Our last step is to finalize our CSV content and hook up our export button to actually provide the ability to download the CSV. No matter how you got your data, you should have an array, lineArray, that contains comma-separated strings. We need to join those into a single string, with each row ending in a new line.

var csvContent = lineArray.join("\n");
var fileName = 'MyDatatableCsv';
$("#exportBtn").attr({
  'href': encodeURI(csvContent),
  'download': fileName + '.csv',
  'target': '_blank'
});
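
One note on the export button from the first step: since we set href and download attributes on #exportBtn, it’s assumed to be an anchor element. Here’s a minimal sketch of creating it with jQuery - the label and placement are just assumptions:

// Hypothetical export button: an <a> element, so the href/download attributes above apply to it.
$("<a>", {
  id: "exportBtn",
  text: "Export CSV"
}).insertBefore("#idOfMyDatatable");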

And there you have it! This is how I create a custom CSV from a datatable. I hope this helps you in your endeavors!

The Right Stuff

Building applications requires time, thoughtful planning, and the right tools to complete the task. There are so many programming languages and frameworks to choose from that sometimes I think people are overwhelmed by the “possibilities.”

At work, we use Scala and Play Framework. Normally, those who come from a Java background are supposed to be amazed by the versatility of Scala. I come from a Java background. I am not impressed. I don’t hate Scala, but it’s a struggle to use because it is poorly documented. The massive number of collections, both mutable and immutable, is not a feature. I’m constantly using the REPL to determine how to add, delete, and retrieve values from the whole host of Scala collections. It’s not intuitive. Scala is like an elitist programming language. It’s as if its creators decided, “Oh, you’d like to learn Scala? That’s great! Best of luck to you!” and then stood idly by, laughing as you try to make sense of it all. I even own a book on Scala, and even it is ambiguous at best about many of Scala’s “features.”

Now on to Play Framework, which is even worse than Scala itself. I’m pretty much a pro at application frameworks. I’ve used Ruby on Rails, Dropwizard (Java), Django (Python), Node.js, and probably a few more that I’m forgetting right now. I’ll admit that Play’s backend functionality…you know, models and controllers…is rather straightforward. If you actually know Scala or Java (though Play seems to discourage using Java), you’re all set. But when it comes to writing a front-end? Good luck!

I spent almost 2 weeks building a view that should have taken 2 days. I wanted a rather simple form and Play’s Form generation seemed like the way to go. It has validation on the backend, serializes to a model, and all that good stuff. Unfortunately, many use cases just do not conform to Play’s idea of a form. I wanted radio buttons displayed vertically. Play said NO. I wanted radio buttons that looked like buttons (no circle to click on…instead, the button background should change when clicked). Play has no means of doing that. It says you can create your own inputs, but that’s in the singular, and obviously, I wanted multiple radio buttons. I asked for help on StackOverflow and even in the Play Framework Google group. CRICKETS. If you use this framework, you are on your own to figure out how to use it. The documentation lacks full descriptions of how to do basic things and there’s no community for help.

In the end, I was put out of my misery and wrote the front-end in pure HTML with some jQuery and Parsley for validation. And it took me barely a few hours to get everything working! Just because something is there doesn’t mean it’s the right tool. Sometimes, if you know something is just easier to work with, saves time, and the result will be easier to maintain in the long run, that’s what you should choose.

What It Means to Be in Tech

Many people claim to be ‘in tech.’ There is a prestigious connotation to it. It means being at the cutting edge of something great, being in demand and desirable, and being well-off, among other positives associated with the tech industry. The tech industry continues to boom, with USA Today reporting that 5 major tech companies, not financial firms, hold a third of all US cash. No wonder people want to be associated with tech!

My definition of someone in tech is narrow. I believe it is a person who has a technical background (science, technology, engineering, and mathematics) who directly uses that education and experience in their everyday responsibilities. In other words, if they have an engineering education, they should actually be doing engineering work.

Needless to say, I am absolutely frustrated when people, especially women, are labeled as ‘women in tech.’ I understand that people are trying to make tech more appealing to women by showing successful women in the field. But articles like Elle Magazine’s Most Influential Women in Technology are misleading. Only 2 of the individuals listed actually have some education in a STEM field. Most are entrepreneurs or occupy some upper management role. They are not doing anything technical! Therefore, I argue they are not in tech! It’s like saying the janitor at a local high school is in education. They’re not!

This extends far beyond labeling women as being in the tech industry, though. There are plenty of people labeling themselves as being part of the tech industry on LinkedIn, but take a look at their education and it’s usually some business or finance degree. Just because they may work for a tech company does not mean they are ‘in tech.’

It’s time to use this label only for those who actually do true technical work! Allowing those without a technical degree and role to assume the label of ‘in tech’ degrades what it means to be an engineer or a scientist. For women, it is even more detrimental because the aspirational models set before them are still not true women in STEM roles. It sends the message, “Yes, you can be in tech and no, you don’t have to major in engineering or science to do it!” So, be wary the next time someone claims to be ‘in tech.’

A Change in Scenery

I started a new job in the beginning of April. My first job as a software engineer, though, will always have a special spot in my heart. I accepted my first software engineering role because a) I’d be working for a well-known company with a prominent reputation in the tech space and b) I really needed some hands-on experience. My only experience prior to my first job was a summer internship at a start-up, where I really had no clue what the heck was going on. “What’s an API?” Haha! There’s quite a gap between learning software development in a university classroom and actually building applications for a business.

My First Job

My first job served its purpose. I am very proud of the projects I completed, the technologies I learned on the job, and the social skills I acquired in order to deploy my work. Seriously, social skills in the office are highly underrated! I leaned on my co-workers a lot so I could understand how to do Jenkins deployments, merge changes in GitHub, and run applications in Docker containers. My boss was very hands-off. He generally told me what to build and it was up to me to build it. I was a very competent developer, though. UPenn and my true mentor (my husband) prepared me well and I made short work of most of my projects! After a month or so into the job, my boss trusted me enough to allow me to select which technologies to use for my projects. I became an AngularJS master! I was really grateful that I was able to do full-stack development there. Some people like front-end dev, others like back-end, but I’ve never wanted to choose between the two. I like the challenges of both.

Honest Job Searching

If your first job is your dream job, you’re probably very, very lucky! While I liked my first job, there were some hard truths that I’d come to realize as I sat in my cubicle. What makes a fulfilling career? I wanted my next job to make me really happy to come to work. I didn’t want just a paycheck. I’m giving up 40+ hours of my life every week and I wanted those 40+ hours to count for something other than dollars. My first job was really just a job. I wasn’t unhappy, but I wasn’t exactly joyful either. There was a slowdown of projects and I was bored. I knew that I wanted my next job to have lots of work to keep me busy and engaged! I also wanted the option to work from home. I really hated that office. I had a terribly uncomfortable chair, noisy cubicle neighbors, and I felt alone in the cubicle maze. Software development is a pretty sweet career - it CAN be done at home. And I wanted to be in the comfort of my home some days, sitting on my comfortable couch in the peaceful quiet thoughtfully writing code. When I earnestly started my job search, I told my recruiter that I was looking for a full-stack position with the ability to work from home. And she found a great company for me!

My New Job

I can’t say enough good things about where I am right now! I think I’m the happiest I’ve ever been! I truly LOVE my new job! The people I work with are really fun people. There’s a nice culture in the office where people say hello to one another in the morning, chat casually in Slack, and go out to lunch together! It’s an open office, which I thought I would come to hate, but it’s actually grown on me. I truly feel like I’m part of the team (♪♪ everything is awesome…. ♪♪). The product I’m improving and building out is really interesting and has a lot going for it! I’m definitely challenged by the large codebase and I celebrate every time I fix a bug or make an improvement. Raising my hands in victory a few times a week feels very, very good! I work from home some days and that’s really improved my quality of life. I can really get in the zone at home. There’s no need for putting on headphones with white noise playing in the background. And it’s also nice to ‘work’ with my favorite coding partner: my dog!

Final Thoughts

If you’re just starting out in software engineering, don’t be discouraged by being told you don’t have enough experience. An opportunity will come your way and you’ll make the best of it. When you’re ready to move on, be honest about what you want. It is achievable!

The Kinesis FreeStyle2

kinesis keyboard

So, how’s my new Kinesis FreeStyle2 keyboard? Is it as ergonomic as I need it to be to relieve my RSI symptoms? Would it work well for you? Read on!

Day 1

I started my work day setting up the FreeStyle2 with the VIP3 accessory kit. It took about 10 minutes, because I actually read the directions :-P The wrist rests and the lifts snapped in nice and easy! I decided to use the largest angle, 15 degrees, for the lift. The more vertical it is, the closer it is to that handshake position that’s supposed to be ideal.

Benefits of a Split Keyboard

There are so many benefits of having a split keyboard. For one, my arms were so much more relaxed. Not only did it help my arms, but my back as well!

One surprising benefit of the split keyboard: the space in the middle! I was able to put my steno pad front and center! No longer was it off to the side so I’d have to keep pivoting to look at my notes!

Hotkey Blunder

One of the things I was really looking forward to having on the FreeStyle2 was the special keys on the left key panel. There are a variety of hotkeys including copy, paste, and select all. No more Command+C! So, as I was working, I found myself in situations that required copying and pasting text. Time to use the hotkeys! So, I highlighted some text, pressed the Copy hotkey….and heard an “ERRRR” noise out of my headphones. I started pressing a variety of hotkeys and none of them did what I expected. One of the keys put Google Chrome in background mode (who knew Chrome had such a thing as background mode?) and the others force quit some of my applications! YIKES!

I searched online, but no one reported issues with this keyboard’s hotkeys. That could only mean one thing: Dvorak. I switched over to the Dvorak-Qwerty setting on my Mac and TA DA, the hotkeys worked!!!

Week 1+

Well, it’s been one week of using my Kinesis keyboard. Some things have definitely changed since Day 1. Let’s review:

Goodbye, Hotkeys

The hotkeys are still buggy. Really buggy. After switching to Dvorak-Qwerty, they worked, but only in certain applications. They didn’t work in my main code editor, Eclipse. The applications where they did work made other Command+key combinations stop working, one of which was the all-important Command+S (Save). In my Atom editor, I apparently re-assigned keypresses by pressing Command+S. OOPS! Needless to say, I decided to go back to regular Dvorak, making the hotkeys useless again. At least I can actually save my work now.

Tilt Good? Tilt Bad?

I can’t decide whether the 15 degree tilt is comfortable or not. I’ve been experimenting now with the 10 degree tilt. I’m not sure whether it’s actually beneficial. The jury is still out on this one.

Where Should My Mouse Go?

I think this is the most fundamentally troubling question I’ve been trying to answer this past week. The split keyboard makes my arms nice and comfortable, but only when typing. But my mouse is farther away, meaning I need to lean over to do all my clicking, moving of arrows, and scrolling. After a day or two of my new configuration, my right shoulder was really uncomfortable.

For a few days, I tried putting my mouse in the middle, between the two keyboard pads. But this is sadly also uncomfortable. It kind of negates the reason why I wanted a split keyboard in the first place…so my arms would not be at some unnatural angle in front of my body.

Lately, I’ve tried doing as much as I can via the keyboard. I’ve started learning the keypresses needed to toggle between tabs and applications. I’ve used the page-down keys more for scrolling down on webpages. It’s better than constantly reaching for the mouse. But the mouse is still essential and it’s hard to know what else I can do to make using it more comfortable.

Final Thoughts

I’m still very happy with my Kinesis FreeStyle2 purchase.

PROS:

  • It really has helped a lot with my wrist, arm, shoulder and finger pain.
  • That space in between the keypads is great for a notebook!
  • There’s no learning curve to use it. It’s a regular keyboard, just in a split configuration!

CONS:

  • Your mouse will be further away due to the split. There’s probably no place to put your mouse in a comfortable position. You’ll need to try learning more key shortcuts to do things you’d normally do with your mouse so you don’t need to reach for your mouse as much.
  • If you type in Dvorak, the hotkeys are useless.

Thanks for reading!

Time for a New Keyboard

kinesis keyboard

I’ve been thinking a lot about ergonomics lately. A few weeks ago, I thought I was dreaming as I looked down and saw that my little finger’s bone had somehow managed to overlap my ring finger’s bone. My hand looked like a claw! I had to use my other hand to set my finger bones back in shape.

I knew I had to do something. I love software engineering, but let’s face it––it’s sort of a crippling job. Sitting all day: Not good. Typing/mousing all day: Not good.

I’ve written before about how switching to Dvorak helped alleviate my RSI. But clearly, it was back. A simple search revealed that the Apple Magic Bluetooth keyboard is absolutely AWFUL for RSI. Time for a new keyboard…

Ye Old Keyboard

First, I brought my old Logitech keyboard to work. This poor keyboard has been sitting collecting dust in front of my desktop (which was last turned on…hmm…3 years ago?). Right away, there was a problem––the number pad on the right. I felt like my mouse was so far away and my right arm was forced to be at a 60 degree angle from my body to type. Awkward!

The Kinesis Classic QD

Thankfully, a friend allowed me to borrow his Kinesis Classic QD keyboard (pictured above) to get a feel for it. He swears by it after suffering from RSI himself and has been using it for years! This is probably one of the strangest keyboards I’ve ever seen, but it has a lot going for it. The wrist pads are perfectly positioned. The inset wells for the keys put your fingers at a natural downward tilt. And what I loved most of all––the division between left and right puts your arms perpendicular to your body, which is way more comfortable than having them angled to the middle of your body! It also moves the keys you probably press the most to the thumb regions. The keys can be remapped in whatever configuration you prefer.

After trying the Kinesis Classic (newer versions are called the Advantage) for a week, it was clear what I loved and hated about this keyboard. And what I hated was the key configuration. I have tiny hands, so using any of the thumb buttons actually put a lot of strain on my wrists as I twisted and reached for those buttons. Even though I could remap keys, it was really difficult to get used to pressing keys I expected in areas of the keyboard that no longer existed in the Kinesis key layout. Yet, this trial run was enough to convince me that Kinesis was on to something…

The Kinesis FreeStyle2

I’ll write a post about my experiences with this keyboard once I get it later this week. However, I think the Kinesis FreeStyle2 has everything I loved about the Classic without all the things I disliked. I bought the 20” extension for that great left/right arm separation. I also purchased the optional VIP3 accessory package that allows you to vertically tilt the keyboard. Supposedly, the “handshake” position is better for your wrists and arms. If you have pain in your lower arm, this might be great for you. The VIP3 also comes with Kinesis’ awesome wrist pads (seriously, they’re worth raving about!).

Stay Tuned For More On My Journey into Being More Ergonomical!

Interactive SVGs in AngularJS

Dvorak keyboard

My latest project has been utilizing my art degree (wait, what? An art degree is useful as a software engineer? Surprisingly, YES). Using Inkscape, I created a very customized keyboard as an SVG (Scalable Vector Graphic). There were two things I had to do with this file:

  1. Embed the SVG into a view inside my AngularJS application.
  2. Each key element/node needed an ng-click attribute so when it was pressed, something would happen. Because, you know, what good is a key if you can’t click on it?

As I discovered, these two tasks are a lot harder than they should be. Putting an SVG into a view is as simple as making the file path the data of an <object> in the HTML. You can add an ng-click to the <object>, but that only works if you want the entire SVG to be clickable. I wanted interactivity inside the SVG itself. Adding ng-click as an attribute within a node in the SVG file will NOT work! To do what I wanted, I needed to use directives to embed the SVG in the view and attach an ng-click attribute to each of my keys.

I found this site that had an interactive map and thought, “Hey, that’s kind of similar to what I want to do!” So, using their insights, I was able to make a clickable keyboard within my AngularJS application. Here’s how I did it:

Add A Class and Unique ID to SVG Nodes

My keys were mainly circle elements with some text on top. I flattened my SVG to one layer to ensure that when a user clicked anywhere inside the circle element, it would trigger a click event. I added a class to each circle element and also updated the id of each circle element to reference its value, like so:

<circle
    r="43"
    cy="569"
    cx="227"
    style="fill:#4d4d4d;stroke:#000000;stroke-width:3"
    id="btn_MENU"
    class="boardKeys" />

Make Two Directives

I just put both directives in the same file: keyDirectives.js.

Directive One: Embed the SVG.

var myApp = angular.module('KeyboardAppModule');

myApp.directive('svgKeys', ['$compile', function($compile) {
    return {
        restrict: 'A',
        templateUrl: 'images/keyboard.svg',
        link: function(scope, element, attrs) {
            var keys = element[0].querySelectorAll('.boardKeys');
            angular.forEach(keys, function(path, key) {
                var myKey = angular.element(path);
                // Use the dash-delimited form so Angular normalizes it to the myKey directive.
                myKey.attr("my-key", "");
                $compile(myKey)(scope);
            });
        }
    };
}]);

Notice these details: var keys is an array of all elements with the class boardKeys…the same class I attached to each circle element (key) inside my SVG. For each of these circles, the directive adds a my-key attribute, which Angular normalizes to myKey - the name of Directive Two. Directive Two is then initiated. But first, since I now have the directive that places the SVG into the view, take a look at the view’s HTML:

    <div class="row">
        <div class="col-sm-8 col-sm-offset-2">
            <div svgKeys></div>
        </div>
    </div>

See how I just threw that directive in there? That’ll just embed the SVG in that div! Woohoo! But wait, I still need to register clicking a key.

Directive Two: Add ng-click to all keys.

myApp.directive('myKey', ['$compile', function($compile) {
    return {
        restrict: 'A',
        scope: true,
        link: function (scope, element, attrs) {
            scope.elementId = element.attr("id");
            scope.keyClick = function() {
                scope.pressKey(scope.elementId);
            };
            element.attr("ng-click", "keyClick()");
            element.removeAttr("myKey");
            $compile(element)(scope);
        }
    }
}]);

So, Directive One added the myKey directive to each key, which then calls Directive Two, which adds the ng-click attribute to each key. For me, I used a function called pressKey() in my controller. It’s accessible via scope and I felt that my controller is where someone would look to see what happens when a button is clicked. You could just as well define what should happen when a key is clicked here in the directive. Also notice that I pass scope.elementId to my pressKey function. scope.elementId is the key’s ID! So, now I know which key was clicked and can handle the case as needed!
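
To round this out, here’s roughly what that controller function could look like. The controller name and the body of pressKey are hypothetical; the only requirement is that pressKey is defined on a scope the directives inherit from:

// Hypothetical controller: pressKey lives on scope, so the directive-added
// ng-click="keyClick()" can reach it via scope.pressKey(scope.elementId).
myApp.controller('KeyboardController', ['$scope', function($scope) {
    $scope.pressKey = function(keyId) {
        // keyId is the clicked circle's id, e.g. "btn_MENU".
        console.log("Key pressed: " + keyId);
    };
}]);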

FIN

It obviously doesn’t take a lot of code to make an SVG file’s elements interactive in Angular, but it’s certainly not intuitive. Nonetheless, it’s great that this code works and I’m excited for my next Angular application with interactive SVGs!

DropWizard Metrics 101

metrics art

IO Graph Art by nathanmac87

Post revised 11/23/2015 for correctness. Thank you to Jan-Olav Eide and Tim Bart for leading me in the right direction!

My most recent project at work has been to utilize DropWizard metrics to gather information about an application I built and send that data to Graphite to display in a realtime dashboard. As I normally do, I reviewed the documentation for Dropwizard Metrics, but as usual, it left a lot to be desired. Their example did not go into the nitty-gritty of how to connect everything together. The internet was also silent on how to actually use DropWizard metrics.

So here is my tutorial on how I was able to get DropWizard metrics up and running in my application:

1. Maven Application? Update pom.xml

If you’ve got a Maven application, you’ll need to add metrics-core as a dependency. If you’re using some sort of special reporter like Graphite, you may need to add additional dependencies. This was what I had to add to my pom:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>3.1.0</version>
</dependency>
<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-graphite</artifactId>
    <version>3.1.0</version>
</dependency>
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-metrics-graphite</artifactId>
    <version>0.9.1</version>
</dependency>

2. Start Your Metrics On Application Startup

Supposedly, there are two ways you can initialize your metrics reporters. A reporter sends your data to the console, a logger, Graphite, or another DropWizard-approved output. The first way is to specify the reporters in your config.yml file, as seen here. This is supposed to start reporting your metrics automatically when you start your application. I tried this method, but I had zero luck with it. I ended up getting a parsing error from within DropWizard’s validation library.

So, I initialized my metrics the alternative way: in the run() method of my application class. DropWizard’s environment already creates a MetricRegistry object on startup. This object manages all of our metrics (timers, counters, meters, histograms, and gauges). Since I added metrics to my resource class, I needed to pass that MetricRegistry object to it. This is how my application class looked:

public class MyApplication extends Application<MyConfiguration>{

    public static void main(String[] args) throws Exception {
        new MyApplication().run(args);
    }

    @Override
    public void run(MyConfiguration config, Environment env) throws Exception {

        final Graphite graphite = new Graphite(new InetSocketAddress("my.graphite.host", 2003));

        final GraphiteReporter reporter = GraphiteReporter.forRegistry(env.metrics())
            .prefixedWith("upToYou")
            .convertRatesTo(TimeUnit.SECONDS)
            .convertDurationsTo(TimeUnit.MILLISECONDS)
            .filter(MetricFilter.ALL)
            .build(graphite);
        reporter.start(5, TimeUnit.SECONDS);
        env.jersey().register(new MyResource(env.metrics()));
    }
}

Some things to note: The prefixedWith is not required. What it does is prepend a label to our metric names. I’ll explain this more in Step 3.

3. Add Metrics To Your Resources

I wanted metrics like ‘How many requests are we getting at /path1’ and ‘How long is it taking for the method handling /path1 to return a response’. Chances are you do, too. In order to get them, we’ll need to add metrics to our resources. And those metrics need to be “saved” in the MetricRegistry from Step 2, so we’ll need to create a constructor for our resource that takes in the MetricRegistry object we passed in Step 2.

There are two ways to add metrics to your resources: using annotations and by actually creating objects of a metrics class. I’ll give you an example of both:

public class MyResource {
     MetricRegistry metrics;
     Meter requestCount;

     public MyResource(MetricRegistry registry) {
        this.metrics = registry;
        requestCount = metrics.meter(MetricRegistry.name("requestCount"));
     }

     @POST
     @Path("/path1")
     @Produces(MediaType.TEXT_PLAIN)
     @Timed(absolute=true, name="requestRuntime")
     public Response handleRequest() {
        requestCount.mark();
        return Response.ok("Hello World").build(); //you know what I mean...
     }
}

Notice these details:

  • We declare the meter requestCount, which will give us the total count of all calls made to /path1 and how many requests /path1 gets in 1 minute, 5 minute, and 15 minute timeframes. We initialize this Meter object in our resource constructor, and its initialization goes through our MetricRegistry so the registry can manage it accordingly.
  • See the @Timed annotation? DropWizard will automatically create a timer and whenever handleRequest() is called, the timer will record how long it takes to complete this method. We do not need to initialize a timer object ourselves. We can name this timer. I’ve added the absolute=true field, because otherwise, the entire path of this metric in Graphite will be my prefix + package path + class + requestRuntime. Too long! By setting absolute to true, the metric path is just the prefix + the metric name.
  • Remember in Step 2 when we saw that .prefixedWith("upToYou")? Basically, our reporter, whether Graphite or the console or something else, will refer to this metric by the prefix + name of the metric. Graphite showed me options to graph like, “upToYou.requestCount.count” and “upToYou.requestCount.mean” (the ‘count’ and ‘mean’ part are automatically attached by DropWizard metrics). You don’t need any prefix, but if you want it, go for it back in Step 2.
  • Finally, we call our meter’s mark() method. That tells the metric, “Hey, the method we’re tracking has been called! Increment the meter count!” The mark() method is specific to the meter metric, though. Please refer to the DropWizard Metrics documentation on how to initialize and call counters, histograms, and other metrics. Now, here we’ve programmatically called the meter’s method. We don’t have to do that. We could just use the annotation @Metered(absolute=true, name="myMeter") and DropWizard would automatically call mark() for us whenever our method was called (there’s a small sketch of that right after this list). But this is how you would do it if you wanted additional control…or multiple meters on a single method.
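
Here’s a minimal sketch of that annotation-only variant, assuming the same resource as above and DropWizard’s @Metered annotation; the meter name is just the placeholder mentioned in the last bullet:

     @POST
     @Path("/path1")
     @Produces(MediaType.TEXT_PLAIN)
     @Metered(absolute=true, name="myMeter")   // DropWizard calls mark() for us on every request
     @Timed(absolute=true, name="requestRuntime")
     public Response handleRequest() {
        return Response.ok("Hello World").build();
     }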

4. Profit

Now, when we start up our application, our metrics will be monitoring what we want and sending data to the console or Graphite or wherever!

So, there you have it! This is how everything ties together in DropWizard Metrics! And I’m the first person to actually write an example of this! Woohoo! I hope you have enjoyed this post and that it’s given you a decent amount of confidence to use DropWizard metrics in your application! Happy Coding!

If you really like this, tell me on Twitter @LBeckerCodes !