We recently started using Protractor for end-to-end testing of an Angular application. Protractor returns promises when locating elements on the page, so in some cases you need to do your assertions inside callback functions.
For example:
```javascript
// this works just fine
expect(element(by.id('email')).getText()).toEqual('x@x.com');

// this will fail
expect(element(by.id('email')).getText().length).toEqual(7);

// getText returns a promise and not a string, so instead...
element(by.id('email')).getText().then(function(data) {
  expect(data.length).toEqual(7);
});
```
You can use the element explorer to debug your locators, but be aware that even non-matching locators will return an ElementFinder object.
I was searching around Stack Overflow trying to figure out how to stub out S3 uploads with Paperclip when running unit tests, and came across a few answers that looked promising, along the lines of:
```
Magick::ImageMagickError: WriteBlob Failed `/Users/ME/projects/CLIENT/code/RAILS_APP/tmp/line.png' @ error/png.c/MagickPNGErrorHandler/1804
from /Users/ME/.rvm/gems/ruby-2.1.0/gems/gruff-0.5.1/lib/gruff/base.rb:425:in `write'
from /Users/ME/.rvm/gems/ruby-2.1.0/gems/gruff-0.5.1/lib/gruff/base.rb:425:in `write'
from /Users/ME/projects/CLIENT/code/RAILS_APP/app/models/reports/chart.rb:36:in `generate_line_graph'
from (irb):16
from /Users/ME/.rvm/gems/ruby-2.1.0/gems/railties-3.2.16/lib/rails/commands/console.rb:47:in `start'
from /Users/ME/.rvm/gems/ruby-2.1.0/gems/railties-3.2.16/lib/rails/commands/console.rb:8:in `start'
from /Users/ME/.rvm/gems/ruby-2.1.0/gems/railties-3.2.16/lib/rails/commands.rb:41:in `'
from script/rails:6:in `require'
from script/rails:6:in `main'
```
The issue: the error was raised when writing to the tmp directory, but writing to another directory worked as expected.
g.write("public/line.png") and g.write("public/tmp/line.png") both worked
Turns out this was a simple mistake on my part. I had just provisioned a new machine and cloned the project repo, so there was no tmp directory in my project yet. It would be nice if the error message had just told me so.
This article outlines some basic guidelines to follow when working with timezones in Ruby on Rails applications. Ruby on Rails has great support for timezones, but getting it working correctly can be tricky. Following the techniques below should save you some headache.
Application Settings – use the default UTC settings
Set each request to the logged-on user's timezone
Display times and dates using the I18n view helpers
Override the user's timezone when displaying time-aware entities
Override the user's timezone when saving time-aware entities
Filter data based on the user's timezone
Is it ‘time zone’, ‘timezone’ or ‘time-zone’? Multiple styles end up being used in the Rails codebase: config.time_zone and config.active_record.default_timezone. I use ‘timezone’ in this article and time_zone in the sample codebase. Read more on StackExchange.
The sample includes user profiles with timezone settings, an event model, and a work schedule model. It comes with a small set of sample data as well. To set it up locally, follow the README instructions.
Application Settings
Rails uses the following defaults for a new application:
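For reference, a new Rails application's timezone defaults are equivalent to the following (shown here as they would appear in config/application.rb; you do not need to write these out):

```ruby
# Default timezone settings for a new Rails application.
config.time_zone = 'UTC'                        # Time.zone used by the app
config.active_record.default_timezone = :utc    # how ActiveRecord reads/writes DB timestamps
```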
Both can be overridden in your application configuration file (config/application.rb) – don't do it!
```ruby
# Do NOT do this!!!
config.time_zone = 'Central Time (US & Canada)'
config.active_record.default_timezone = :local
```
Set the timezone for each request
How you determine the current user's timezone will differ from application to application. In the sample code we store the value in a time_zone field on the user model. The sample application uses the techniques from this RailsCast episode. Alternatively you may want to set it based on the client's browser setting.
Use an around_filter (or a combination of before_filter and after_filter) in the ApplicationController to set each request's timezone. If we don't have a current user it will default to UTC.
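A minimal sketch of that filter, assuming the time_zone field on the user model described above (Rails 4+ spells it around_action):

```ruby
class ApplicationController < ActionController::Base
  around_filter :set_time_zone   # around_action in Rails 4+

  private

  # Wrap the whole request in the user's timezone. Time.use_zone
  # restores the previous zone when the block finishes, and we
  # default to UTC when nobody is logged in.
  def set_time_zone(&block)
    zone = current_user ? current_user.time_zone : 'UTC'
    Time.use_zone(zone, &block)
  end
end
```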
Display times and dates using the I18n view helpers
The I18n helpers are timezone aware. Aside from rendering the datetimes in the logged-on user's locale format, they will convert the stored UTC times to the current thread's timezone.
On the homepage of the sample application there are numerous examples; also check out the config/locales/en.yml file.
Basic example:
```ruby
I18n.localize(current_user.created_at)
```
Beware: I18n.localize does not handle nil values. You will need to guard against nils for nullable columns.
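One way to guard against nils (localize_or_default is a hypothetical helper, not part of the sample app):

```ruby
# I18n.localize raises when given nil, so guard nullable
# columns before localizing.
def localize_or_default(datetime, default = 'n/a')
  datetime ? I18n.localize(datetime) : default
end

# e.g. in a view: <%= localize_or_default(current_user.confirmed_at) %>
```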
Override the display of dates
In the sample application we have an Event model and a WorkSchedule model. Each model has its own timezone. Event times are based on where the event will actually occur, so displaying the dates in the user's timezone makes no sense at all; we must override the display. There are a few techniques that can be employed:
in_time_zone method (see work schedule in the sample application)
Time.use_zone blocks (see events in the sample application)
in_time_zone example: WorkSchedule overrides the start_at and end_at model attributes, so no special handling is needed in the views (of course you still need to use I18n.localize).
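A sketch of what that override might look like, assuming a time_zone string column on WorkSchedule as in the sample app:

```ruby
class WorkSchedule < ActiveRecord::Base
  # Convert the stored UTC values into this schedule's own timezone,
  # so views can simply call I18n.localize on the result.
  def start_at
    super.in_time_zone(time_zone) if super
  end

  def end_at
    super.in_time_zone(time_zone) if super
  end
end
```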
Time.use_zone example: The Event model does not override start_at or end_at attributes, but uses Time.use_zone blocks in the views.
```erb
<!-- displayed times in the event's timezone -->
<% Time.use_zone(@event.time_zone) do %>
  <%= l(@event.start_at) %>
  <%= l(@event.end_at) %>
<% end %>

<!-- now back to the logged-on user's timezone -->
<p>
  Created at <%= l(@event.created_at) %>
  Updated at <%= l(@event.updated_at) %>
</p>
```
There is odd behavior with Time.use_zone when doing in-memory sorting in Ruby (sort_by). See the HomeController in the sample application. Ordered attributes do not convert inside the use_zone block. I am not sure if this is a bug or by design.
Override timezone when saving data
Sometimes you want to override the timezone when saving data. Let's say my profile is set to the ‘Pacific’ timezone, but I am creating an event that will occur in New York (‘Eastern’ timezone). I do not want Rails to convert the times to ‘Pacific’. We can reset the current thread's timezone to the event's timezone before the create and update actions execute.
```ruby
class EventsController < ApplicationController
  before_action :set_event_time_zone, only: [:create, :update]

  # ... all action code (unchanged)

  private

  def set_event_time_zone
    if params[:event]
      Time.zone = params[:event][:time_zone]
    end
  end
end
```
Alternatively this could be done using Time.use_zone blocks in the create and update actions.
Filter data based on the user's timezone
Rails will handle most of this automatically if you don't use raw SQL. For example:
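A sketch using the article's Event model: as long as you pass timezone-aware values through the query interface, ActiveRecord converts them to UTC when building the SQL.

```ruby
# Time.zone.now and friends are timezone aware; ActiveRecord converts
# them to UTC for the query, so no manual conversion is needed.
Event.where("created_at >= ?", Time.zone.now - 1.hour)
Event.where(created_at: 1.day.ago..Time.zone.now)
```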
Be careful if you are filtering datetime data by day. The queries below are from the sample application, logged in as a user with the (GMT+06:00) Astana timezone.
```ruby
# WARNING: do not do this
# events created yesterday
yesterday = Date.current - 1.day
Event.where("DATE(created_at) = ?", yesterday)
# => SELECT "events".* FROM "events" WHERE (DATE(created_at) = '2013-11-14')
# => 0 records
```
The proper way:
```ruby
# events created yesterday
start_at = (Date.current - 1.day).beginning_of_day
end_at = start_at.end_of_day
@events_from_yesterday = Event.where("created_at BETWEEN ? AND ?", start_at, end_at)
# => SELECT "events".* FROM "events" WHERE (created_at BETWEEN '2013-11-13 18:00:00' AND '2013-11-14 17:59:59')
# => 12 records
```
This is not an issue when filtering on date columns (only datetime).
```ruby
# rails will do it for us, Date.current is timezone aware
Meeting.where("scheduled_on = ?", Date.current)

# but do NOT do this, it will not work correctly at certain times of the day
Meeting.where("scheduled_on = ?", Date.today)
```
In some situations you might need to query for data based on a specific timezone:
```ruby
@corp_office = OpenStruct.new({ time_zone: "Eastern Time (US & Canada)" })

Time.use_zone(@corp_office.time_zone) do
  start_at = Date.current.beginning_of_day
  end_at = start_at.end_of_day
  @events_scheduled_today = Event.where("start_at BETWEEN ? AND ?", start_at, end_at).order(:start_at)
end
# => SELECT "events".* FROM "events" WHERE (start_at BETWEEN '2013-11-13 05:00:00' AND '2013-11-14 04:59:59') ORDER BY "events"."start_at" ASC
```
Conclusion
I hope you found this article helpful. All feedback is welcome – I am sure I have left out some key information or just plain got it wrong.
Wikipedia lists IBS (Irritable Bowel Syndrome) as a “functional gastrointestinal disorder”. It can cause either diarrhea or constipation and often alternates between the two. In short, it is “no fun”.
I was first diagnosed with IBS in 2002 and was told to
eat more fiber
take a stress reduction class
I had never heard of ‘IBS’ before; it seems to be the catch-all diagnosis for stomach issues when doctors do not know for certain what is causing them.
Things that did help me, but I would not say they ‘cured’ me:
reduce meat intake
replace with lentils and other legumes which are high in fiber
increase cooked vegetable intake
eliminate dairy
this made a huge difference for me, for awhile
yoga and meditation
IBS seems to be heavily linked to stress
Even after changing my diet I still had discomfort off and on. At one point in 2007 I thought my appendix was going to burst; I went to the emergency room only to find out my appendix was fine and I was just really constipated. Talk about embarrassing.
After that I decided to try some acupuncture treatment. It did help some, but the thing that ‘cured’ my IBS was the advice my acupuncturist gave me:
do NOT eat while you are working on the computer
do NOT eat while you are driving
The gist of it is that when it's time to eat, it's time to eat and nothing else:
chew your food
eat slowly
digest for at least 15 minutes before going back to work
maybe take a short walk while you digest
no reading or other mentally engaging activities
Doing the above, my IBS has dwindled away. It did not happen overnight; it took many months of being very strict about my eating habits, mainly taking the time to digest after a meal before engaging in any work activities.
I am somewhat of a grazer; I like to eat small amounts throughout the day while working. My work consists of being on the computer over 90% of my work day. Sometimes my work is stressful, and even when it is not ‘stressful’ it is still very engaging and involves a very high level of concentration.
I believe I was training my body to think it was under stress whenever I was eating, whether I was on the computer or not.
If you suffer from IBS I hope the above will work for you as well.
I used this script https://gist.github.com/juniorz/1564581 to import all of my old posts from Blogger. It failed to import a few posts but overall worked great!
I do have various formatting issues on older posts and will need to revisit those in the future…
Authoring with Markdown is so much better, and there's no need for embedded gists: yay!
Similar to knife-solo for use with Chef, supply_drop allows you to provision servers using Puppet without the need for a puppet master server. It uses Capistrano for executing commands on the remote server. I put together a working sample set of Puppet and supply_drop deployment scripts for provisioning a Postgres server.
Do you have a recovery plan in case your Postgres server crashes? Your daily pg_dump is probably not going to cut it.
Postgres uses Write-Ahead Logging (WAL)
Write-Ahead Logging (WAL) is a standard method for ensuring data integrity. A detailed description can be found in most (if not all) books about transaction processing. Briefly, WAL’s central concept is that changes to data files (where tables and indexes reside) must be written only after those changes have been logged, that is, after log records describing the changes have been flushed to permanent storage. If we follow this procedure, we do not need to flush data pages to disk on every transaction commit, because we know that in the event of a crash we will be able to recover the database using the log: any changes that have not been applied to the data pages can be redone from the log records. (This is roll-forward recovery, also known as REDO.)
Setting up Continuous Archiving and Point-in-Time Recovery (PITR) for your Postgres WAL files is very complex; luckily for us, the WAL-e project has greatly simplified this process. WAL-e has utilities for sending all WAL files to an AWS S3 bucket as the log files are generated. For more information, see the WAL-e project documentation.
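As a rough sketch of what such a setup looks like, archiving is enabled in postgresql.conf and the archive_command is pointed at wal-e (the envdir path holding the AWS credentials is the conventional location, not a requirement):

```
wal_level = archive        # 'replica' on newer Postgres versions
archive_mode = on
archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'
archive_timeout = 60       # force a segment switch at least once a minute
```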