Mar 29

In January I joined C.C. Pace Systems, Inc. as Director, Enterprise Solutions, Federal Practice. I am extremely excited to be leading the effort to bring our Agile software development experience to our government clients. There are some incredibly interesting projects going on within government agencies, and I believe that in this time of budget constraints and pressure to deliver working software, CC Pace is the right company for the job. Both as a company and personally, we have been developing software with Agile methodologies (XP, Scrum, Kanban) for about 13 years. We’ve trained, coached, and delivered Agile projects in multiple industries and are steadily building our Federal client list.

With the US CIO’s 25 Point Implementation Plan to Reform Federal Information Technology Management and the OMB’s Contracting Guidance to Support Modular Development both emphasizing the need for the Federal government to move to Agile software development, there is a sea change underway in how agencies run their projects and how they contract. With experienced practitioners in both project management (Scrum) and Agile engineering (XP), I am excited about the contribution CC Pace can make.

Please reach out to me if you can use my help.

Dec 06

Tim Drury of Bright Green Projects recently penned a post titled A Very Long (Agile) Engagement. In it he makes some excellent points about why it is in a consultant’s interest to secure very long client engagements, thus breaking the sell-deliver-signoff cycle of short consulting engagements. I encourage you to read it. One point I found intriguing is the idea of Agile Agents: client employees who search for new requirements based on the agile model of the business process. They then rely on the consultants to quickly implement the new requirements, which opens up the opportunity for the Agile Agents to discover further improvements to the business.

It is an interesting idea, although I am not sure Tim has made the case for why the consultants, specifically, should be the ones working the backlog. Based on my own agile project experience, I believe the case rests on their ability to deliver code in an agile environment, which makes it a win-win for both the client and the consultant. Perhaps Tim will have a follow-up post detailing that part.

One aspect of Agile that can help turn the multi-client sell-deliver-signoff cycle into a single-client sell-deliver cycle, and turn employees into Agile Agents, is daily interaction: by being on the same team as the consulting developers, employees may discover new business requirements faster. For example, new ideas for features or products can be put forward during the sprints that might never surface otherwise. Ideas that occur “in the moment” are often forgotten in a legacy methodology and never make it out of the team.

Have you seen any businesses who have changed employee roles to take advantage of the Agile Development Methodology? If so, I’d love to hear about it.

Dec 05

This week in the Washington, DC area, FedScoop’s 2nd Annual Cloud Shoot Out will feature discussion of the key issues around cloud computing by C-level industry leaders. Leaders from federal agencies will also discuss how they are implementing cloud computing solutions. I find cloud computing a fascinating topic and have experience implementing various applications in the cloud via services such as Amazon Web Services (AWS) and Heroku. I think the benefits are definitely worth exploring.

Some of the confirmed speakers include:

Dave McClure, Associate Administrator, Office of Citizen Services & Innovative Technologies, GSA
Jeff Bergeron, US Public Sector CTO, HP
John Bordwine, CTO of Public Sector, Symantec Corporation
Mark Day, Chief Scientist, Riverbed
Jeff Casazza, Director of Security Technology, Intel
Carl Moses, Senior Manager, Amazon Web Services Security
Kevin Paschuck, VP Public Sector, RightNow Technologies
Susie Adams, CTO, Microsoft Federal
Sonny Bhagowalia, Deputy Associate Administrator, Office of Citizen Services & Innovative Technologies, GSA
Chris Kemp, CTO of IT, NASA
Peter Tseronis, Senior Advisor, Department of Energy

I think the discussions of how to transition current systems to the cloud, and of security in the cloud, will be particularly interesting.

More information is available on the event site at http://fedscoop.com/events/fedscoop-2nd-annual-cloud-shoot-out/.

Where:

The Newseum
555 Pennsylvania Ave NW
7th Floor

When:

8:00 am – 11:45 am Thursday, December 9th

If you get to attend please let me know what you found most interesting. I unfortunately won’t be in attendance this year.

Oct 14

Developers are commonly involved in the creation and design of databases for the applications they build. Since most developers aren’t trained as Database Administrators, it is important that they develop sound approaches to writing SQL and Data Definition Language (DDL). I previously reviewed Database in Depth: Relational Theory for Practitioners by C. J. Date in the post Book Review: Database in Depth. I highly recommend it as a guide to best practices for any relational database work.

Singular or Plural

One of the choices that invariably comes up is what to use for table names. Sometimes the particular RDBMS restricts the choices, but often this is pretty wide open. What I’m interested in is the small decision of whether to use the singular or plural form of a noun for the name. A common example is the table where user data is held. Should the table be named user or users?

My Choice

I am not sure there is a standard answer. This has been discussed before (for example, see Database table naming conventions), but I haven’t seen anyone justify the choice on theoretical grounds. Some frameworks or DDL tools will make the choice for you, as I discuss below for Ruby on Rails. My own choice has been to use the singular name. I think this leads to improved code quality because:

  1. It leads to consistency in foreign key names (order.user_id references user.id; see the migration sketch after this list).
  2. It makes ORM a 1:1 match (the user object reflects a row in the user table).
  3. It avoids some of the annoyances of English pluralization rules (see Rails below).
  4. There are also small things like table ordering. For example, user, users, userOrder, and user_line_order may not sort in the most useful way.
  5. I believe it is the simplest design that works. Using the plural form makes me think there is a singular table from which the plural form is distinguished. Would that be a table guaranteed to hold only a single row?
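
As a rough illustration of points 1 and 2, here is a minimal migration sketch using singular table names. The table and column names are hypothetical, and I am using Rails 2.x-era migration syntax; note also that order is a reserved word in SQL, which ActiveRecord handles by quoting identifiers, though you may prefer a different name.

```ruby
# Hypothetical migration using singular table names (Rails 2.x-era syntax).
class CreateUserAndOrder < ActiveRecord::Migration
  def self.up
    create_table :user do |t|
      t.string :name
    end

    create_table :order do |t|
      t.integer :user_id   # order.user_id references user.id exactly
      t.decimal :total
    end
  end

  def self.down
    drop_table :order
    drop_table :user
  end
end
```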

Rails makes a default choice?

So what happens when you use the command script/generate scaffold user in a Rails project? Rails generates a DB migration for creating a users table. There are ways to control the model names and the DB migration, but it is interesting that the developers of Rails decided to make plural table names the default. (See Agile Web Development with Rails for example syntax on how to override the defaults; a sketch follows below.)
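
For reference, here is a hedged sketch of two ways to get singular table names in Rails. Both use Rails 2.x-era syntax, so verify against your version and the book mentioned above.

```ruby
# Option 1: disable pluralization globally, in config/environment.rb
# (Rails 2.x-era configuration; verify for your version):
config.active_record.pluralize_table_names = false

# Option 2: override the table name for a single model
# (set_table_name is the Rails 2.x idiom; newer Rails uses
# self.table_name=):
class User < ActiveRecord::Base
  set_table_name "user"
end
```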

But what about that tricky English language? Well, script/generate scaffold person produces people. Pretty cool. And woman produces women. But then goose becomes gooses. Whoops! Now you may have no need for a goose object in your application, but I am sure there are other examples like this. Alas, this appears to be a bug in the Rails framework, but just think how easy it is to bump into this yourself when trying to remember the plural form of the object you are working with. In addition, these names show up in the URL for Rails routing, so they can become public. I’d rather see geese in applications I work on.
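
If you do hit an irregular plural like this, Rails lets you teach the inflector. A minimal sketch (the file location varies by Rails version, typically an initializer or environment.rb):

```ruby
# Teach the Rails inflector an irregular plural so generators and
# routing produce "geese" instead of "gooses":
ActiveSupport::Inflector.inflections do |inflect|
  inflect.irregular "goose", "geese"
end
```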

I realize there are pros and cons to the choice of using singular or plural names. In terms of producing quality code, what do you use and why?

Sep 23

On September 29th at 6:30 pm in Washington, DC, the 10th meeting of Tech Cocktail DC is taking place at Slaviya. I’ve purchased my ticket and hope to meet some of you there. If you are going, let me know and I will keep an eye out for you.

The Tech Cocktail meetings bring together technology entrepreneurs, VCs, bloggers, and skilled technologists (web developers, DBAs, social media experts, etc.) from around the country. There are product demos and great networking discussions throughout the night. For more information check out http://techcocktail.com/tech-cocktail-dc-fall-2010-startup-mixer-presented-by-palantir-2010-09.

Sep 23

On October 12, 2010 the FedTalks conference is taking place at Sidney Harman Hall in Washington, DC. I’ve been invited to attend and am excited to participate. If you’ve never attended, this conference features keynotes by Federal government technology executives as well as executives from commercial and non-profit organizations. Noted speakers include David Dejewski, Arianna Huffington, Chris Kemp, and others. The conference theme is improving government with technology. Please check out http://www.fedtalks.com/ for complete information.

It looks like @fedtalks has already established #fedtalks as the Twitter hashtag for the event. Look for my tweets that day as well. If you plan to attend, let me know if you want to meet up to discuss technology and DC.


Sep 20

Why should you not deploy software on a Friday? I’ve been pondering deployment issues and looking into best practices for timing software releases (time of day, day of week). I am not considering how often to release (I think short iterations are best). It seems to me that the true answer is always highly dependent on the individual organization. Below are some points which may make a good case against releasing web applications on a Friday. They may not apply to your organization, but you might also weigh them for other times and days.

  1. In some methodologies the release itself is pretty much a non-event because of a rock-solid integration environment and continuous integration practices. However, it is still almost impossible to predict user reaction and usage exactly. Having support (software, network, system, etc.) available can therefore be crucial after software is released, and typically the best support personnel do not work on weekends or late at night.
  2. A lot of groups end their iterations on Friday. If they are also under the gun to deploy, they may be rushing to finalize those last features. This can hurt quality and leave the team fatigued, and neither risk is good for a deployment. So if the team has been pushed to finish the iteration, you may want to give them a chance to recover before you push them through a major deployment.
  3. Related to the issue above is the situation in which the production environment is complex. Deployment may become a large process in which many changes have to occur at once to support the new features of the release. This requires planning, and the planning can suffer greatly if the rush to finish the iteration takes priority. One database table change that gets missed because of the rush to meet a Friday deployment deadline can cost thousands in e-commerce revenue, all for want of better planning. (One way to avoid this is to make deployment part of the iteration with a proper level of effort and priority.) Scott Ambler discusses dealing with complex deployments in his article Planning for Deployment.
  4. How do your users like Friday releases? Is Friday a critical time for them to use the software? If so, then they may not like the hassle of figuring out new/changed functionality.

Do you know of any other reasons to avoid certain times/days of the week for deployment?

Sep 13

In Rails Session Management Howto, Part III of this series, I described memory-based session storage approaches. The mem_cache_store approach provides fast access to the session data and unparalleled scaling, but doesn’t provide rock-solid reliability (because it is ultimately a cache). It may also be overkill for a lot of applications. In this post, I will discuss the final approach: database-backed sessions.

There are a couple of options. The first is to use DRb storage. With the drb_store, the session data is marshaled to a DRb server. The DRb server is accessible from multiple servers, so you can scale your application out to many servers, and it is also reliable. DRb stands for Distributed Ruby; more information about DRb and the DRb server is available in Intro to DRb. Performance is reported to be very solid with DRb-based session storage.
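
As a rough sketch of wiring this up (assuming Rails 2.x-era configuration symbols; verify against your version’s docs):

```ruby
# In config/environment.rb (assumed Rails 2.x-era syntax):
config.action_controller.session_store = :drb_store
# The store talks to a DRb server process you run separately; older
# Rails versions defaulted to a local DRb URI (commonly
# druby://localhost:9192, but verify for your version).
```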

The second option is to utilize the built-in Active Record capability of Rails. I like the active_record_store because it is easy to configure and immediately provides scalability and reliability for session data storage. Performance is largely dependent on the database server infrastructure, which is a well-known field with many optimization possibilities. Rails provides a simple way to set up the sessions table by running rake db:sessions:create; you then run the migration to create the table via rake db:migrate.
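
Putting that together, a minimal setup sketch might look like this (Rails 2.x-era syntax; verify against your version):

```ruby
# In config/environment.rb:
config.action_controller.session_store = :active_record_store

# Then, from the shell:
#   rake db:sessions:create   # generates the sessions table migration
#   rake db:migrate           # creates the sessions table
```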

As pointed out by the authors of Agile Web Development with Rails, the proper choice of session storage is uniquely application- and environment-driven. There is an older study by Scott Barron comparing the performance of some of these approaches. Although the numbers may have shifted since then, the considerations and insights are probably still valid.

I personally use the active_record_store as my default approach. It requires no special outside expertise to implement and for most applications it is scalable and reliable. What do you use?

Sep 07

In Rails Session Management Howto, Part II, I discussed using the PStore approach for session data storage. The p_store based sessions utilize the local OS file system. In this post, I will present memory-based storage approaches for session management in Rails.

The first approach is to use memory_store based sessions. With MemoryStore, the session objects are kept in the application’s memory with no serialization necessary. While this makes it extremely fast to move objects in and out of the session store, it is not a reliable method because the session data is only available to a single server. It also does not scale well, since it requires sticky sessions.

The second approach utilizes memcached, a high-performance, distributed memory object caching system. Memcached is used by some of the largest websites in the world and is certainly a very solid approach for session storage. The mem_cache_store based sessions meet the criteria of scalability (just add more servers), but this approach is still not reliable. Because it is a cache, you still need some form of reliable storage for your session data, such as a database store. But if you need super-fast reads of the session data across multiple servers, then memcached is the best-performing approach. You can find that approach discussed in Sessions. Several memcached Ruby clients are available, including RMemCache.
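
For reference, a hedged configuration sketch of both memory-based stores (Rails 2.x-era symbols; the option name for listing memcached servers varies by version, so check your docs):

```ruby
# In-process memory store: fastest, but sessions live in a single
# server process only:
config.action_controller.session_store = :memory_store

# memcached-backed store: scales across servers; the memcached server
# list defaults to localhost:11211 unless configured:
config.action_controller.session_store = :mem_cache_store
```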

For more discussion of these memory based sessions and their configuration, I recommend you pickup a copy of the excellent reference Agile Web Development with Rails.

Are you using a memory based session approach? How do you scale and protect against server crashes (or maintenance)?

Aug 30

In Rails Session Management Howto Part I, I introduced the concepts of managing HTTP sessions with Rails and explored the first approach, cookie-based sessions. A couple of its limits were the size of the data that can be stored in the session and the lack of encryption as the data travels between browser and server. The next approach is to store session data in a flat file on the server in what is known as the PStore format. This format stores the serialized (marshaled) session data on the file system. The location and name (actually just the prefix of the name) of the file can be configured in the environment.rb file. Refer to Agile Web Development with Rails by Dave Thomas for details on the syntax and configuration.
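
As a rough sketch of that configuration (older CGI-session-based Rails; the path and prefix below are illustrative assumptions, and the book covers the exact syntax for each version):

```ruby
# In config/environment.rb:
config.action_controller.session_store = :p_store
config.action_controller.session_options.update(
  :tmpdir => "/var/myapp/sessions",  # directory for session files
  :prefix => "ruby_sess."            # session filename prefix
)
```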

The benefits of using p_store based sessions are that the data is kept securely on the server and never crosses the network between the browser and server. This improves security and also reduces bandwidth usage. The size limit on the session data is also greatly increased (limited by your system’s I/O).

What happens when scaling the number of servers? Clearly each server cannot have an individual PStore unless you use “sticky” sessions and are willing to have users lose their session data when a server fails. That is not acceptable for scalable, reliable, load-balanced systems. With more than one server, the PStore file must be available to all servers, because subsequent HTTP requests may be directed to a different server each time. One way to do this is to place the PStore files on a network-mounted storage system.

Thus p_store based sessions offer increased data security and reduced bandwidth usage versus cookie-based sessions. However, they also bring some challenging server configuration choices and network file storage. It is an I/O-limited solution which requires a lot of optimization and monitoring. For some applications this might not be a problem, but it should be tested: in an application with many simultaneous sessions, the number of PStore files can grow very large.

I’ll also briefly mention that there is a file_store option for sessions in Rails which also uses flat files, but it is rarely used because the session data must be strings.

Is anyone using p_store based sessions in their applications? Is it scalable? Is it reliable when servers failover?

In the next part of this series I will examine some memory based sessions.
