Preparation of training materials

For the past 7 years I have taken part in dozens of training courses, both local and abroad, and delivered hundreds if not thousands of lectures. Since I run regular classes on some topics, I need to polish my materials regularly through research, testing and exploration to keep them up to date.

There is also a variety of new technologies to cover, and for those I have to prepare materials from scratch after receiving a company request. What comes next are meetings with the client, the management staff and the target audience, precisely calculating the number of hours needed, the topics, the group's knowledge level and the number of exercises required (unless it is a seminar-based course with no labs). The important thing is targeting the right group. I send out quiz/test forms that cover the trainees' subjective self-evaluations as well as technical questions that help me figure out their level, and I average that value with their own personal marks. Forming a group at the same level is important for the pace - it should be neither too quick nor too slow.

Normally, for a training we need to research the technology even if we're well acquainted with it: some history first (project creators, release date, where the inspiration came from), related technologies (comparison tables), the current release and the newest features. I also have to create an agenda of topics and topic contents - just as books do. Then I need to design structured material, add code samples and graphics (diagrams), and create demos and exercises where needed. So we end up with extensive slides, on-site demos and exercises for the labs or at home.

I usually follow the same pattern when creating course content. I create skeletons of the presentations and make a very rough time estimate. Then I follow this flow:

  1. Check previous training course folders. Since I've run a number of training classes already, there is a chance I can reuse some presentations, or at least some slides (graphics, code samples, comparisons, stats), for the current training as well. I can also mix three presentations into one pretty neat and useful deck.
  2. Check out Slideshare. We're creating slides, so why not look at other authors' slides and see whether we can learn anything new or gain inspiration for demos and labs.
  3. Google for similar training courses. Since I've already built my skeleton and my timing, it's great to compare them against several other training classes out there. This can be _very_ subjective, as it depends on the group's level of understanding and the type of training (lectures, samples, Q&A, labs, other), but some synchronization can still be done based on similar training programs.
  4. Search for tutorials and FAQs. Straightforward: browse online for tutorials and FAQ sections that could add a piece of information or an example to the slides.
  5. Google with filetype:ppt. An addition to the Slideshare search, covering presentations from all over the world. I once found a Chinese presentation I couldn't read, but 2 of its graphics were very helpful.
  6. Check for libraries and demos. Sites such as Java2s are so-called 'code repositories'. The same goes for GitHub, SourceForge and others, where you can find great, well-documented sample projects and code snippets.
  7. Check YouTube. For several years I didn't search for video tutorials, but the latest trends show that many technologies are covered in video tutorials and samples on YouTube, which is great. So use it as an extra resource.
  8. Google the standard search phrases for technology X, such as: X examples, X demos, X code samples, X library. It helps.
  9. Search DZone/Reddit. Social bookmarking sites and directories may have material related to the tech you need to cover. Try them as well.
  10. Amazon. Similar to the training courses, check for books (to purchase if needed), or review their agendas and covered topics - you might have missed something important in your scope.
  11. #yourtechhere. Twitter has so many people that some of them are bound to talk about what you need. The chance of reading spam is high, but you can find rare facts there.
  12. StackOverflow/Nabble. Super interesting Q&A threads on the most frequent issues with the tech can feed the support part of the course. You can even run into the authors of the product you're covering.
  13. Podcasts. Still not that popular, but you can download some audio material to your player and listen to it like a study book. Some universities even publish open courses.
  14. Cheatsheets. I love them all. They offer structured content with graphics or tables for the most important aspects of every popular (and not so popular) technology. I even tend to hand them to students during some exams, to help a bit and exercise their reading memory.

CakePHP headaches at a glance

@jose_zap replied to a tweet of mine comparing CodeIgniter and CakePHP and the different aspects of the two frameworks.

Since Twitter itself is restricted to 140 characters (a limit I usually like - less off-topic chatter and media), I'd better blog here about a couple of things I don't like in Cake.

First of all, I've been doing Cake for a year and a half and have several projects up and running with various web services, sync mechanisms and so on. It's usually one of my preferred platforms (right after WP and Croogo, which is actually Cake-based), but that doesn't mean I adore all of its features.

Auto recursive models 

By default, linking models in Cake sets a recursive level of 1, so you get direct access to the first level of all correlated models. It is usually nice, as you don't have to join or bind models on every other request. The bad part, however, is that every serious project (and even non-serious ones with more than 10 tables!) gets bloated with insignificant data that is not used anywhere on the site. When we have a product that has categories and is part of an order, which has a user and so on, there are lots of queries in the background and tons of useless data, which leads to reduced performance and longer page load times.

Yet again - it is very useful, especially for non-technicians who have a hard time with joins, but I get various requests from oDesk or local clients with old Cake sites that need optimization and fine-tuning: the site used to work in the beginning, but the previous developer relied on the standard recursive=1 setting, so the more database records there are, the harder things get.
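A generic illustration (plain JavaScript, not actual CakePHP code; the tiny in-memory "database" is made up) of why automatic recursive fetching bloats results: each extra level eagerly pulls in every related record, whether or not the page needs it.

```javascript
// Toy in-memory data imitating a product with first-level associations.
const db = {
  product: { name: "Widget", categoryIds: [1, 2], orderIds: [10, 11, 12] },
};

// recursive = -1 fetches just the record; recursive >= 1 eagerly loads
// all first-level associations, mimicking Cake's default behavior.
function fetchProduct(recursive) {
  const result = { name: db.product.name };
  if (recursive >= 1) {
    result.categories = db.product.categoryIds.map((id) => ({ id }));
    result.orders = db.product.orderIds.map((id) => ({ id }));
  }
  return result;
}

console.log(Object.keys(fetchProduct(-1)).length); // 1 - just the product
console.log(Object.keys(fetchProduct(1)).length);  // 3 - product + all associations
```

With real tables of thousands of rows, those extra association fetches are the useless queries and data the post complains about.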


ACL

ACL is... well, it sucks big time in my opinion. It starts from the wrong concepts, and it is also hard to implement (even with years of technical background it took me hours, if not days, to set up something that should be in the core). @jose_zap - I like Croogo's way of setting up the jQuery matrix of roles and controller actions and predefining the actions for each role. The UI plugin for the standard ACL is too complex usability-wise and doesn't do the job.

Another thing is role-based auth. Cake does a pretty good job restricting different roles, but the autogenerated MVC code implies that no user-based authorization is done at all. In a few other projects of mine I had to fix actions that were accessible directly via URL (with no checks in the controller backend), protecting every single add/edit action, as well as index/view listings, from unauthorized access. In other words, user number 2 is usually able to change the URL and see user number 3's listings, or click the edit/5 link and edit another user's records. It is not hard to implement these checks manually, but it takes time, and generating tens of MVCs from tables with the predefined logic in place opens up lots of vulnerabilities.
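The manual ownership check argued for above can be sketched in plain JavaScript (generic pseudo-backend code, not CakePHP; all names and records here are illustrative):

```javascript
// Illustrative records table: each record knows which user owns it.
const records = [
  { id: 5, ownerId: 3, body: "user 3's record" },
  { id: 6, ownerId: 2, body: "user 2's record" },
];

// Before serving edit/<recordId>, verify the record exists and actually
// belongs to the logged-in user - the check autogenerated MVCs skip.
function canEdit(currentUserId, recordId) {
  const record = records.find((r) => r.id === recordId);
  return !!record && record.ownerId === currentUserId;
}

console.log(canEdit(2, 6)); // true  - user 2 edits their own record
console.log(canEdit(2, 5)); // false - user 2 cannot edit user 3's record
```

The point is that the guard runs in the controller backend, so merely changing the URL never exposes another user's data.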


i18n

What I don't like here is that, by default, one table is used for all translations. Once I tried setting up separate i18n tables for the different DB tables, but it was quite tough for me to configure the models. Also, multilingual content combined with localized data (i18n together with l10n, so to speak) had to be implemented following third-party tutorials, with lots of app_controller magic in between.


I was able to fix all of the concerns above back then. It just happened that none of them was straightforward, and the workarounds were painful, which leads to discomfort while coding or revamping an application.

These are my top concerns with Cake so far. I'm pretty sure I've had many more, but nowadays, when I explore a framework, I usually look for several things first:

  • multilingual support
  • user management
  • security
  • design adaptiveness and plugin capabilities

I think I have a hard time with AJAX as well, but I can't recall the specific projects.

Web security workshops in Saudi Arabia

I am off for a few weeks, delivering several 3-day trainings on web app security best practices in Saudi Arabia. It's my first training outside of Europe, so I had to spend some time exploring the culture of the nation here, which is pretty exciting.

I've already held two trainings so far, with a few more to go. There is going to be another batch in November, run by a colleague of mine, and I'm trying to sync my materials so they can serve as a good reference during that next batch of trainings.

DX Image Box – Lightbox Croogo plugin

Today I released DX Image Box on GitHub. It's a Fancybox wrapper that hooks into the Croogo plugin system, so a developer can easily add a lightbox with two lines of code.

The last time I did Croogo work was in December, but recently I had to develop some small Croogo-based projects, and since some of the features are likely to be reused later, I decided to do some plugin work. This is the first plugin released, and I will consider contributing another one or two, thanks to Fahad's work on Croogo.

Hourly rates, Amount of work and Availability

Note: This essay is not universally applicable, as some projects definitely demand a strict work schedule, reporting and collaboration between team members. However, it reflects the majority of projects and clients out there in the wild.

As an employer I prefer to pay on a project basis. It's easier to plan my budget and the cost of the final result.

As a contractor I prefer hourly payments. They relieve the pressure of incorrect specifications and further negotiations after the project scope and the budget have already been set.

I am well acquainted with the pros and cons of both methods, but there are three terms that I believe most clients use in an inappropriate context, without gaining the maximum productivity and optimal cost for their projects: the hourly rate, the amount of work and the availability.

Hourly rate

As I said, I got tired of arguing with clients over dummy specifications and clones of the "Clients from Hell" stories. I don't expect all of them to be technicians, but usually the time dedicated to meetings, specifications and further meetings during the process exceeds the actual development time. On top of that, during the initial meetings and negotiations the project is still pending and not actually accepted, so there is a serious risk of "This price doesn't fit my budget, I cancel"... so, yeah.

So I prefer hourly rates. I estimate single tasks and modules based on what I know. In case of any misunderstanding we have a short chat/call with detailed information about what the extra work involves, what else needs to be done and how this affects the previous estimate. On approval we work; on rejection, I don't waste my time. Simple as that.

What annoys me most is that clients look for specific hourly rates. For instance: "I am able to pay $10 per hour", while I estimate some projects at $20/h. And I get rejected because of my _hourly rate_, without even getting the chance to estimate the amount of work to be done.

This is completely wrong for one simple and basic reason. The cost of a task to be solved is based on the following formula:

Solution = amount of time * hourly cost * quality coefficient

Let's start backwards.

  1. Quality coefficient - a decimal value between 0 and 1 that measures the final quality of the solution. For instance, for a website build, a coefficient of 0 would mean a completely useless end product. A coefficient of 1 would mean a multilingual website (if needed) that is W3C-valid and cross-browser compatible, with reusable and well-documented source code, stable and secure.
  2. Hourly cost - simply the price of 60 minutes of work.
  3. Amount of time - here is the tricky part. This is the time an expert needs to complete a given task. You know what? The $5 coders I work with are pretty slow, and their finished work often needs refactoring. While they solve a simple problem in 4+ hours at a coefficient of 0.7, my core developers at $20/h complete the same assignment in less than an hour at 0.95+ quality. And ask for more work.
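
Plugging those numbers into the formula gives a quick sketch (the figures are this post's own example; the helper name is made up):

```javascript
// Money spent on a task = amount of time * hourly cost.
// The quality coefficient is tracked alongside, per the formula above.
function taskCost(hours, hourlyRate) {
  return hours * hourlyRate;
}

// The post's example: a $5/h coder taking 4 hours at quality 0.7,
// versus a $20/h developer taking 1 hour at quality 0.95.
const cheapCoder = { cost: taskCost(4, 5), quality: 0.7 };
const expert = { cost: taskCost(1, 20), quality: 0.95 };

// Both tasks cost exactly the same $20, but the expert finishes four
// times faster and at higher quality.
console.log(cheapCoder.cost, expert.cost); // 20 20
```

Same money, very different outcomes - which is why the hourly rate alone tells you nothing.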

So what I am trying to say is: don't judge a coder by the hourly rate. It's irrelevant until you know the final estimate and the quality of the work.

Amount of work

This is another standard requirement, and it is closely tied to the hourly rate.

In my experience there are two types of projects: maximum work over a short term, or flowing work over a long term. Most employers require long-term, non-stop work. This usually hurts the worker's productivity and motivation as well.

If you work with an experienced coder, designer or any specialist who can deliver proficient, high-quality work in a short time, you need to free them from the 9-to-5 office schedule. Sometimes this is not possible due to company requirements about availability, meetings and so on, especially when people work in different time zones. So here is term number three - availability.


Availability

Availability is the time range during which the worker can be reached by the person responsible for the project. However, most clients expect the person to be available only during the hours of actual work, and also to be working the entire time during the availability period. Neither is mandatory.

For projects that need quick reactions and support, the contractor must guarantee availability hours, _not_ work hours. It's just like support positions: two people should be _available_ for 12 hours to handle any possible exceptional situation, but that doesn't necessarily mean they do actual work during the whole period.

My practice is to work on two or three projects a week; as projects vary in their specifications, I dedicate 40-50 hours a week in total. I am available 70+ hours a week in case a client needs some consulting or just a status update, and I do quick, clean work during the 15-25 hours I have agreed to spend on a given project. This usually leads to the best results at a minimal cost.

What could I have done for the conservative clients?

I don't like cheating, and I usually cancel projects and reject invitations from clients who wouldn't understand this article. But, you know what? If I am confident in my skills (as I am in the few technologies I've been polishing over the last 7 years), I could have done the following:

  • estimate the project completion time multiplied by three or four
  • offer only half of my availability

As a result I would be charging for work done four times more slowly, at twice what I would normally charge a flexible client. I would log 120 hours of work in my diary (3 projects * 40 hours), which is hardly possible, while actually working at my normal pace and schedule. No one would even doubt that I am a fast and clean worker. But why should I cheat, when we could simply agree on my preferred rate and a flexible working time?

Twitter using Drupal

Following Dries and Rob Douglas on Twitter, I noticed in the latest updates that Twitter has started using Drupal for its dev team's community site. Dries describes on his blog that Twitter went with Drupal using one of the community platforms I really enjoy. While WP has BuddyPress (which can also run standalone), Drupal offers a few configurations for social platforms, and it seems Twitter is using one of them.

There we are - . I would really like to see feedback from the Twitter dev team on the usability of the platform. It's a well-known fact that Drupal is a great platform for developers but has a non-intuitive interface for end clients, which is one of the reasons WordPress is so popular right now.

ddsmoothmenu arrow dynamic paths

While working with ddsmoothmenu (the Smooth Navigation Menu by Dynamic Drive) on a WordPress project, I ran into an annoying missing setting: there is no way to provide the image paths dynamically. It is a jQuery-based dropdown menu that looks and works fine but requires a static path to its images. There are only two images, for the down and right arrows, but a static path becomes a serious issue when releasing a website or migrating to a new server.

So I decided to add a new setting, configurable through the JavaScript call in your menu-initialization file, where you can use PHP/Java/Python/whatever to retrieve the correct path dynamically and just pass it in. I'm using v1.5 and I made two changes:

  1. In the ddsmoothmenu.js file, replace all smoothmenu.arrowimages strings with setting.arrowimages (just changing 'smoothmenu' to 'setting'). There are 5 occurrences to replace, spread across lines 75 to 77. The new setting for them is added in the next step.
  2. Add the paths to the images as a setting, so that my call includes the dynamic path to be used in the menus. Since I'm using WordPress, here is my code for calling the menu:

    ddsmoothmenu.init({
        mainmenuid: "header_top_menu",
        orientation: 'h',
        classname: 'ddsmoothmenu',
        arrowimages: {down: ['downarrowclass', '<?php bloginfo("template_url"); ?>/img/down.gif', 23], right: ['rightarrowclass', '<?php bloginfo("template_url"); ?>/img/right.gif']},
        contentsource: "markup"
    });
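For non-WordPress setups, the same idea can be sketched in plain JavaScript: compute the base URL once at runtime and build both arrow paths from it (the helper name and example URL below are illustrative, not part of ddsmoothmenu):

```javascript
// Build both arrow-image paths from a single runtime base URL, instead of
// hard-coding static paths inside ddsmoothmenu.js.
function arrowImagePaths(baseUrl) {
  return {
    down: baseUrl + "/img/down.gif",
    right: baseUrl + "/img/right.gif",
  };
}

const paths = arrowImagePaths("http://example.com/themes/mytheme");
console.log(paths.down);  // http://example.com/themes/mytheme/img/down.gif
console.log(paths.right); // http://example.com/themes/mytheme/img/right.gif
```

The resulting strings can then be passed into the arrowimages setting, so migrating the site only means changing the one base URL.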

Google Plus is on

Google has started their new service, Google+, the newest addition to their web toolkit collection. After the failure of Buzz and Wave (I sincerely miss the latter), this one stands a chance of surviving and definitely shaking up the Facebook market.

After a few hours of clicking around I was happy with this light version of a network: the standard minimalistic design (a total lack of any design, according to some users) and a lightweight platform, unlike Wave. The invitation procedure we have known since GMail works here as well - plenty of mediocre users wanted to be _in_ just to feel special.

A great review on the subject is the comparison of Plus with Wave and Buzz. I'm hitting +1 on that one.

Subscribe to comments has to be integrated

WordPress is still the number one blogging platform, and its most used functionality is posting and commenting on blog threads.

However, people who comment on a blog post normally have no way to find out whether anyone comments back in the same post. The post author (and administrator) receives a notification for each comment, but the comment author has no built-in way to be pinged back about a reply. This is a serious gap in the standard WP functionality. It is at the very least unethical not to inform someone about a reply (which might come in a day, a month, a year or even longer).

WordPress ships with a standard feed for the latest posts, as well as a feed for recent comments. However, subscribing to all comments on a blog, or hunting down a specific thread to subscribe to, is neither usable nor practical. The solution is the Subscribe To Comments plugin, which adds a checkbox to the comment form allowing one to subscribe to further comments in the same thread. This is completely optional and up to one's preferences, but instead of the whole conversation breaking down because of the 'echoing', it provides the instrumentation for real communication. uses Subscribe To Comments for 2 years or so; it's integrated into their web service. So why is it not included in the platform yet?