John O'Duinn: Increase growth and revenue by becoming distributed

In my recent blog post about the one-time setup and recurring costs of an office, I mostly focused on financial costs, human distraction costs, and the cost of increased barriers to hiring. This post talks about another important scenario: when your physical office limits potential company revenue.

Pivigo.com is a company in London, England, which connects companies that need help with data science problems to Ph.D. graduates who are leaving academia in search of real-world problems to solve. This 2.5-year-old company was founded by Dr. Kim Nilsson (ex-astronomer and MBA!) and today employs 4 people.

For Pivigo to be viable, Kim needed:

  • a pipeline of companies looking for help with their real-world Data Science problems. No shortage there.
  • a pipeline of Ph.D graduates looking for their “first non-academic” project. No shortage there.
  • a carefully curated staff of people who understand both the academic and commercial worlds, essential to keep things on track and make sure each event is a success for everyone. Kim has been quietly, diligently working on growing a world-class team at Pivigo for years. Tricky, but Pivigo’s hiring has been going great – although they are always interested to meet outstanding people!
  • a physical place where everyone could meet and work together.

Physical space turned out to be the biggest barrier to Pivigo’s growth and was also the root cause of some organizational problems:

1) Venue: The venue Pivigo had guaranteed access to could only be used once a year, so they could only run one “event” each year. Alternate venues they could find were unworkable because of financial costs or commute logistics in London. Given they could only hold one course per year, it was in Pivigo’s interest to make these classes as large as possible. However, because of the importance of creating strong network bonds among the participants, the physical size of the venue, and limits on skilled human staffing, the biggest they could do was ~80 people in this once-a-year event. These limits on the once-a-year event put a financial cap on the company’s potential revenue.

2) Staffing: These big once-a-year events were super-disruptive to all the staff at Pivigo. Between courses, there was administrative work to do – planning materials, interviewing candidates and companies, arranging venue and hotel logistics, etc. However, the “peak load” during the course clearly dwarfed the “low load” in between courses. Hiring for the “peak load” of the courses meant that there would be a lot of expensive “low load / idle time” between each peak. The situation is very similar to building capacity in fixed-cost physical data centers compared to AWS’s variable, pay-by-demand costs. To add to the complexity, finding and hiring people with these very specialised skills took a long time, so it was simply not practical to “hire by the hour/day” a la gig economy. Smoothing out the peaks and troughs of human workload was essential for Pivigo’s growth and sustainability. If they could hold courses more frequently, each course could be smaller, reducing the “peak load” spike. Changing to a faster cadence of smaller spikes would also make Pivigo operationally more sustainable and scalable.

3) Revenue: Relying on one big event each year gives a big spike of revenue, which the company then slowly spends down over the year – until the next big event. Each and every event has to be successful in order for the company to survive the next year, which makes each event high-risk for the company. This financial unpredictability limits the company’s long-term planning and hiring. Changing to smaller, more frequent courses would make Pivigo’s revenue stream healthier, safer and more predictable.

4) Pipeline of applicants: Interested candidates and companies had a once-a-year chance to apply. If they missed the deadline or were turned away because the class was already full, they had to wait an entire year for the next course. Obviously, many did not wait – waiting a year is simply too long. Holding these courses more frequently would make it more likely that candidates – and companies – would wait for the next course. Finding a way to increase the cadence of these courses would improve the pipeline for Pivigo.

If Pivigo could find a way to hold these courses more frequently, instead of just once-a-year, then they could accelerate growth of their company. To do this, they had to fix the bottleneck caused by the physical location.

Three weeks ago, Pivigo completed their first ever fully-distributed “virtual” course. It used no physical venue. And it was a resounding success. Just like the typical “in-person” events, teams formed and bonded, good work was done, and complex problems were solved. Pivigo staff, course participants and project sponsors were all happy. Just like usual.

This map shows everyone’s physical location.
Map of locations

To make this first-ever fully-distributed “virtual” S2DS event successful, we focused on some ideas outlined in my previous presentations here, here and also in my book. Some things I specifically thought were worth highlighting:

1) Keep tools simple. Helping people focus on the job at hand required removing unnecessary and complex tools. The simpler the tools, the better. We used Zoom, Slack and email. After all, people were here to work together on a real-world data science problem, not to learn how to use complex tools.

2) Very crisply organized human processes. None of these people were seasoned “remoties”, so this was all new to them. They first met as part of this course. They had to learn how to work together as a team, professionally and socially, at the same time as they worked on a project that had to be completed by a fixed deadline.

3) As this was Pivigo’s first time doing this, Kim made a smart decision to explicitly limit the size, so there were only 15 people. This gave Kim, Jason and the rest of the staff extra time and space to carefully check in with each of the remote participants, and gave everyone the best chance of success. Future events will experiment with cohort sizes.

4) Each participant said that they only applied because they could attend “remotely” – even though *none* of them had prior experience working remotely like this. Pivigo were able to interview and recruit participants who would normally not even apply for the London-based event. The most common reason I heard for not being able to travel to London was the disruption to parents with young children – successful applicants worked from their homes on real-world problems while still being able to take care of their families. The cost of travel to/from England and the cost of living in London were also mentioned. The need and demand was clearly there. As was their willingness to try something they’d never done before.

5) I note the diversity impact of this new approach. This cohort had a ratio of 26% female / 74% male, while prior in-person S2DS classes typically had a ratio of 35% female / 65% male. This is only one data point, so we’ll watch this with the next S2DS event, and see if there is a trend.

The Virtual S2DS programme was a success. The project outcomes were of similar quality to the campus based events, the participants felt they got a great experience that will help their careers going forward, and, most importantly, the group bonded more strongly than was expected. In a post-event survey, the participants said they would reach out to each other in the future if they had a question or a problem that the network could help with. Interestingly, several of them also expressed an interest in continuing remote working, something they had not considered before.

For Kim and the Pivigo team, this newly-learned ability to hold fully distributed events is game-changing stuff. Physical space is no longer a limiting factor. Now, they can hold more frequent, smaller courses – smoothing down the peaks and troughs of “load”, while also improving the pipelines by making their schedule more timely for applicants. Pivigo are investigating if they could even arrange to run some of these courses concurrently, which would be even more exciting – stay tuned.

Congratulations to Kim and the rest of Pivigo staff. And a big thank you to Adrienne, Aldo, Christine, Prakash, Nina, Lauren, Gordon, Lee, Christien, Rogelio, Sergio, Tiziana, Fabio and Mark for quietly helping prove that this approach worked just fine.

John & Kim.
=====
ps: Pivigo are now accepting applications for their next “virtual” event and their next in-person event. If you are an M.Sc./Ph.D. graduate with a good internet connection, looking for your first real-world project, apply here: http://www.s2ds.org/. Companies looking for help with data science problems can get in touch with Kim and the rest of the Pivigo team at info@s2ds.org.

Air Mozilla: Web QA Weekly Meeting, 03 Dec 2015

This is our weekly gathering of Mozilla's Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air Mozilla: Optimizing for Uncertainty

The web is increasingly complex and dynamic. In the natural realm, 'complex adaptive systems' allow for flux and change in tumultuous environments. Our December speaker...

Air Mozilla: Reps weekly, 03 Dec 2015

This is a weekly call with some of the Reps council members to discuss all matters Reps, share best practices and invite Reps to share...

Mozilla Reps Community: Rep of the Month – November 2015

Please join us in congratulating Dorothee Danedjo Fouba as Rep of the Month for November!

Dorothee has shown amazing leadership in Cameroon – growing that community from zero to over fifty in just one year. By organizing a series of events and empowering emerging leaders, Dorothee has shown great talent for bringing people together to learn and understand the potential of Mozilla to improve their world. As a TechWomen alumna, Dorothee also speaks to and inspires other women technical leaders in their goals for building Mozilla communities across the world.

Don’t forget to congratulate her on Discourse!

Henrik Skupin: Results of the Firefox Automation Survey

On November 23rd I blogged about the active survey covering the information flow inside our Firefox Automation team. This survey was open until November 30th, and I thank every one of the participants who took the time to fill it out. In the following you can find the results:

Most of the contributors who follow our activities have been with Mozilla for the last 3 years, although half of them joined less than a year ago. There is also a 1:1 split between volunteers and paid staff members. This is most likely because of the low number of responses, but in any case increasing the number of volunteers is certainly something we want to follow up on in the next months.

The question about which communication channel is preferred to get the latest news got answered with 78% for the automation mailing list. I feel that this is a strange result given that we haven’t really used that list for active discussions or similar in the past months. But it means we should put more focus on the list. Besides that, 55% also follow our activities on Bugzilla via component watchers. I would assume that those people are mostly our paid staff, who kind of have to follow each other’s work regarding reviews, needinfo requests, and process updates. 44% of respondents read our blog posts on the Mozilla A-Team Planet. So we will put more focus in the future on both blog posts and discussions on the mailing list.

More than half of our followers check for updates at least once a day. So when we get started with interesting discussions I would expect good activity throughout the day.

44% of respondents feel less informed about our current activities, and another 33% answered this question with ‘Mostly’. So it’s a clear indication of what I already suspected, and it clearly needs action on our side to be more communicative. Doing this might also bring more people into our active projects, where mentoring would be much more valuable and time-effective than handling drive-by projects which we cannot fully support.

The type of news we were asked to do more of is definitely the latest changes and code landings from contributors. This will ensure people feel recognized, and contributors will also know each other’s work and see its effectiveness with regard to our project goals. Discussions about various automation-related topics (as mentioned above) are also highly wanted. Other topics like quarterly goals and current status updates are wanted too, and we will see how we can do that. We might be able to fold those general updates into the Engineering Productivity updates which are pushed out twice a month via the A-Team Planet.

There is also a bit of confusion about the Firefox Automation team and how it relates to the Engineering Productivity team (formerly A-Team). Effectively we are all part of the latter, and the “virtual” Automation team was only created when we got shifted back and forth between the A-Team and QA-Team. This will not happen anymore, so we agreed to get rid of this name.

All in all there are some topics which will need further discussion. I will follow up with another blog post soon showing our plans for improvements and how we want to make them happen.

Emily Dunham: Linode vs AWS


I’m examining a Linode account in order to figure out how to switch the application its instances are running to AWS. The first challenge is that instance types in the main dashboard are described by arbitrary numbers (“UI Name” in the chart below), rather than a statistic about their resources or pricing. Here’s how those magic numbers line up to hourly rates and their corresponding monthly price caps:

RAM    Hourly $    Monthly $    UI Name    Cores    SSD (GB)
1GB $0.015/hr $10/mo 1024 1 24
2GB $0.03/hr $20/mo 2048 2 48
4GB $0.06/hr $40/mo 4096 4 96
8GB $0.12/hr $80/mo 8192 6 192
16GB $0.24/hr $160/mo 16384 8 384
32GB $0.48/hr $320/mo 32768 12 768
48GB $0.72/hr $480/mo 49152 16 1152
64GB $0.96/hr $640/mo 65536 20 1536
96GB $1.44/hr $960/mo 98304 20 1920

AWS “Equivalents”

AWS T2 instances have burstable performance. M* instances are general-purpose; C* are compute-optimized; R* are memory-optimized. *3 instances run on slightly older Ivy Bridge or Sandy Bridge processors, while *4 instances run on the newer Haswells. I’m disregarding the G2 (GPU-optimized), D2 (dense-storage), and I2 (IO-optimized) instance types in this analysis.

Note that the AWS specs page has memory in GiB rather than GB. I’ve converted everything into GB in the following table, since the Linode specs are in GB and the AWS RAM amounts don’t seem to follow any particular pattern that would lose information in the conversion.

Hourly price is the Linux/UNIX rate for US West (Northern California) on 2015-12-03. Monthly price estimate is the hourly price multiplied by 730.
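As a quick sanity check of those two conventions, here is a minimal Python sketch; the t2.micro figures used in the example are simply the first row of the table that follows.

# Minimal sketch of the conversions used for the table below.
GIB_TO_GB = 2**30 / 10**9      # 1 GiB expressed in GB (~1.0737)
HOURS_PER_MONTH = 730          # multiplier used for the monthly estimates

def to_gb(gib):
    return round(gib * GIB_TO_GB, 2)

def monthly(hourly_rate):
    return round(hourly_rate * HOURS_PER_MONTH, 2)

# Example: AWS lists t2.micro as 1 GiB RAM at $0.017/hr (US West, 2015-12-03).
print(to_gb(1), monthly(0.017))   # -> 1.07 12.41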

Instance vCPU GB RAM $/hr $/month
t2.micro 1 1.07 .017 12.41
t2.small 1 2.14 .034 24.82
t2.medium 2 4.29 .068 49.64
t2.large 2 8.58 .136 99.28
m4.large 2 8.58 .147 107.31
m4.xlarge 4 17.18 .294 214.62
m4.2xlarge 8 34.36 .588 429.24
m4.4xlarge 16 68.72 1.176 858.48
m4.10xlarge 40 171.8 2.94 2146.2
m3.medium 1 4.02 .077 56.21
m3.large 2 8.05 .154 112.42
m3.xlarge 4 16.11 .308 224.84
m3.2xlarge 8 32.21 .616 449.68
c4.large 2 4.02 .138 100.74
c4.xlarge 4 8.05 .276 201.48
c4.2xlarge 8 16.11 .552 402.96
c4.4xlarge 16 32.21 1.104 805.92
c4.8xlarge 36 64.42 2.208 1611.84
c3.large 2 4.02 .12 87.6
c3.xlarge 4 8.05 .239 174.47
c3.2xlarge 8 16.11 .478 348.94
c3.4xlarge 16 32.21 .956 697.88
c3.8xlarge 32 64.42 1.912 1395.76
r3.large 2 16.37 .195 142.35
r3.xlarge 4 32.75 .39 284.7
r3.2xlarge 8 65.50 .78 569.4
r3.4xlarge 16 131 1.56 1138.8
r3.8xlarge 32 262 3.12 2277.6

Comparison

Linode and AWS do not compare cleanly at all. The smallest AWS instance to match a given Linode type’s RAM typically has fewer vCPUs and costs more in the region where I compared them. Conversely, the smallest AWS instance to match a Linode type’s number of cores often has almost double the RAM of the Linode, and costs substantially more.

Switching from Linode to AWS

When I examine the Servo build machines’ utilization graphs via the Linode dashboard, it becomes clear that even their load spikes aren’t fully utilizing the available CPUs. To view memory usage stats on Linode, it’s necessary to configure hosts to run the longview client. After installation, the client begins reporting data to Linode immediately.

After a few days, these metrics can be used to find the smallest AWS instance whose specs exceed what your application is actually using on Linode.
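To make that last step concrete, here is a hypothetical Python sketch of the selection logic. The candidate list is a small subset of the table above, and the “observed” numbers are placeholders for whatever the Linode graphs and Longview report for your application.

# Hypothetical sketch: pick the cheapest AWS instance whose specs exceed observed usage.
candidates = [
    # (name, vcpu, ram_gb, dollars_per_hour)
    ("t2.medium", 2,  4.29, 0.068),
    ("t2.large",  2,  8.58, 0.136),
    ("m4.large",  2,  8.58, 0.147),
    ("c4.xlarge", 4,  8.05, 0.276),
    ("m4.xlarge", 4, 17.18, 0.294),
]

observed_peak_cores = 1.5   # from the Linode CPU graphs (example value)
observed_peak_ram_gb = 3.2  # from Longview (example value)

fits = [c for c in candidates
        if c[1] >= observed_peak_cores and c[2] >= observed_peak_ram_gb]
name, vcpu, ram_gb, hourly = min(fits, key=lambda c: c[3])
print(f"cheapest fit: {name} ({vcpu} vCPU, {ram_gb} GB RAM) at ${hourly}/hr")
# -> cheapest fit: t2.medium (2 vCPU, 4.29 GB RAM) at $0.068/hr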

Sources:

Hal Wine: Tuning Legacy vcs-sync for 2x profit!


One of the challenges of maintaining a legacy system is deciding how much effort should be invested in improvements. Since modern vcs-sync is “right around the corner”, I have been avoiding looking at improvements to legacy (which is still the production version for all build farm use cases).

While adding another gaia branch, I noticed that the conversion time for active branches was both highly variable and frustratingly long. It usually took 40 minutes for a commit to an active branch to trigger a build farm build, and worse, that time could easily be 60 minutes if the stars didn’t align properly. (Actually, that’s the conversion time for git -> hg. There’s an additional 5-7 minutes, worst case, for b2g_bumper to generate the trigger.)

The full details are in bug 1226805, but a simple rearrangement of the jobs removed the 50% variability in the times and cut the average time by 50% as well. That’s a savings of 20-40 minutes per gaia push!

Moral: don’t take your eye off the legacy systems – there still can be some gold waiting to be found!

Mitchell Baker: Thunderbird Update

This message is a summary and an update to a message about Thunderbird that I sent to Mozilla developers on Monday. Here are the key points. First, Thunderbird and Firefox are interconnected in a few different ways. They are connected through our technical infrastructure. Both use Mozilla build and release systems. This seems arcane but […]

Air Mozilla: Firefox OS London Meetup - Firefox OS Add-Ons

This is a session of the Firefox OS London meetup, dedicated to Firefox OS add-ons. You can find a quick recap of what's new in...

Mozilla WebDev Community: Beer and Tell – November 2015

Once a month, web developers from across the Mozilla Project get together to design programming languages that are intentionally difficult to reason about. While we advanced the state-of-the-art in side effects, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Peterbe: Headsupper.io

Peterbe started us off with headsupper.io, a service that sends notification emails when you commit to a GitHub project with a specific keyword in your commit message. The service is registered as a Github webhook, and you can configure the service to only send emails on new tags if you so desire.
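For the curious, the core mechanic is straightforward: a GitHub push webhook delivers a JSON payload whose commits each carry a message, so the service only has to scan those messages for its trigger keyword and send a mail. Here is a rough Python sketch of that idea (this is not Peterbe's actual code; the keyword and the send_email helper are made up for illustration):

# Rough sketch of the headsupper.io idea, not the real implementation.
from flask import Flask, request

app = Flask(__name__)
KEYWORD = "headsup:"   # assumed trigger word in commit messages

def send_email(subject, body):
    # Stand-in for whatever mail backend the real service uses.
    print("would send:", subject, body)

@app.route("/github-webhook", methods=["POST"])
def github_webhook():
    payload = request.get_json(force=True)          # GitHub push event payload
    for commit in payload.get("commits", []):
        if KEYWORD in commit.get("message", "").lower():
            send_email("Heads up!", commit["message"])
    return "ok"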

Osmose: Advanced Open File (Round 2)

Next up was Osmose (that’s me!), with an Atom package for opening files called Advanced Open File. Advanced Open File adds a convenient modal dialog for finding files to open or create that aims to replace use of the system file dialog. Previously featured on Beer and Tell, today’s update included news of a rewrite in ES2015 using Babel, test coverage, Windows path fixes, and more!

Kumar: React + Redux Live Reload

Kumar shared a demo of an impressive React and Redux development setup that includes live-reloading of the app as the code changes, as well as a detailed view of the state changes happening in the app and the ability to walk through the history of state changes to debug your app. The tools even replay state changes after live-reloading for an impressively short feedback loop during development.

Bwalker: ebird-mybird

Bwalker was next with a site called ebird-mybird. eBird is a bird observation checklist that bird watchers can use to track their observations. ebird-mybird reads in a CSV file exported from eBird and displays the data in various useful forms on a static site, including aggregate sightings by year/month and sightings categorized by species, location, and date.

The site itself is a frontend app that uses C3 for generating charts, PapaParse for parsing the CSV files, and Handlebars for templating.

Potch: Pseudorandom Number Generator

Last up was Potch with a small experiment in generating pseudorandom numbers in JavaScript. Inspired by a blog post about issues with Math.random in V8, Potch created a very simple Codepen that draws on a canvas based on custom-generated random numbers.

If you need sound random number generation, the blog post recommends crypto.randomBytes, also included in the Node standard library.
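The same distinction exists outside JavaScript. As a loose Python analogy (not the Node API the post refers to), the standard library keeps the fast, predictable PRNG and the OS-backed CSPRNG in separate modules:

import random    # Mersenne Twister: fine for simulations and demos, not for security
import secrets   # OS-backed CSPRNG, roughly analogous to Node's crypto.randomBytes

print(random.random())         # deterministic if seeded; predictable by design
print(secrets.token_bytes(8))  # 8 cryptographically strong random bytes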


This week’s result was a programming language composed entirely of pop culture references, including a time-sensitive compiler that assigns optimization levels based on how current your references are.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Christian Legnitto: Star rating is the worst metric I have ever seen

Note: The below is a slightly modified version of a rant I posted internally at Facebook when I was shipping their mobile apps. Even though the post is years old, I think the issues with star rating still apply in general. These days I mainly rant on Twitter.

Not only is star rating the worst metric I have ever seen at an engineering company, I think it is actively encouraging us to make wrong and irrational decisions.

My criticisms, in no particular order:

1. We can game it easily.

On iOS we prompt[1] people to rate our app and get at least a half-star bump. Is that a valid thing to do or are we juicing the stats? We don’t really know. On Android we don’t prompt… should we artificially add in half a star there to make up for the lack of prompt and approximate the “real” rating?[2]

We’re adding in-app rating dialogs to both platforms, which can juice the stats even more.[3] If we are able to add a simple feature – which I think we should add, for what it’s worth – and wildly swing a core metric without actually changing the app itself, I would argue the core metric is not reflective of the state of the app.

2. We don’t understand it.

The star rating is up on Android…we don’t really know why. The star rating is down on iOS and we think we might know why, but we still have big countdown buckets like “performance”. For a concrete example, in the Facebook for Android release before Home we shipped the crashiest release ever…and the star rating was up! We think it was because we added a much-requested feature and people didn’t care about the crashes but we have no way to be sure.

When users give star ratings they are not required to enter text reviews, leaving us blind and with no actionable information for those ratings. So even when we cluster on text reviews (using awesome systems and legit legwork by the data folks) we are working with even fewer data points to try to understand what is happening.[4]

Finally, we have fixed countdown bugs on both platforms in the last quarter… we haven’t seen a step function up or down in either star rating… the trends are pretty constant. This implies that we don’t really know what levers to pull and what they get us.

3. Vocal minorities skewing risk vs reward reasoning.

The absolute number of star ratings is pretty low, so vocal minorities can swing it wildly – a representative sample this is not. For example, on the latest iOS app we think 37% of 1-star reviews can be attributed to a crash on start. Based on what we know, the upper bound of affected users is likely ~1MM, which at 130MM MAU[5] is 0.7%. The fix touches a critical component of the app and mucks around with threading (via blocks), and the master code is completely different. So 0.7% of users make up 37% of our 1-star reviews because of one bug (we think), and we are pushing out a hotfix touching the startup path because of the “37%” when we should really be focusing on the “0.7%”. I think that is the right decision if we put a lot of weight on star rating, but it isn’t the right decision generally. Note that we did not push out a hotfix for the profile picture uploading failure issue in the same release because the 0.5% of DAU affected wasn’t seen as worth the risk and churn.
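To spell out the arithmetic behind that trade-off, using only the numbers quoted above:

# Numbers taken from the paragraph above.
mau = 130_000_000                  # monthly active users
affected_upper_bound = 1_000_000   # upper bound of users hit by the startup crash
crash_share_of_one_star = 0.37     # fraction of 1-star reviews attributed to the crash

print(f"{affected_upper_bound / mau:.2%} of users")        # -> 0.77% (the ~0.7% cited)
print(f"{crash_share_of_one_star:.0%} of 1-star reviews")  # -> 37%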

4. It’s fluid-ish.

A user can give us a star rating and then go back and change it. Often they do, but frequently they don’t (we think). This means our overall star ratings likely have an inertia coefficient and may not reflect the current state of the app. We have no visibility into how much this affects ratings and in what ways. If we fix the iOS crash mentioned above, what percent of users will go back and change their star rating from 1 to something else? As far as I know this inertia coefficient isn’t included in any analysis and isn’t really accounted for in our reasoning and goals.[6]

5. One star != bad experience.

Note: I added #5 today, it wasn’t in the original post.

Digging into our star rating, some curious behavior emerged:

  • The app stores show reviews on the app listing page. The algorithm that chooses which reviews to show must have some balance component as it usually shows at least one negative and positive review. We found that users in certain countries noticed this and would rate us as 1 star just to see their name on the listing page!
  • There were a number of 1 star ratings with very positive reviews attached. It turns out that in some cultures 1 star is the best (“we’re number one”) so those users were trying to give us the best rating and instead gave us the worst!

Of course, there is both the standard OMG CHANGE reaction (“Why am I being forced to install Messenger?”) and user support issues (“I am blocked from sending friend requests, please help me!”) that show up frequently in 1 star reviews too. While both of those are important to capture and measure, they don’t really reflect on the quality of the app or a particular release.

The emperor has no clothes.

Everyone working on mobile knows about these issues and has been going along with star rating due to the idea that a flawed metric is better than no metric. I don’t think even using star rating as a knowingly flawed metric is useful from what I’ve seen over the last quarter. I think we should keep an eye on it as a vanity metric. I think we should work to capture that feedback in-app so we can be in control and get actionable data. I think we should be aware of it as an input to our reasoning about hotfixes but make it clear the star rating itself has no value and shouldn’t be optimized for in a specific release cycle.


  1. Via Appirater at the time.
  2. The idea being that ratings and reviews naturally skew negative due to selection bias.
  3. Many apps actually do this. They prompt to rate the app. If it is rated low, they ask for in-app feedback. If it is rated high, they ask to rate in the app store and redirect.
  4. This is slightly better on Android now that ratings show the device model and OS version…at least there is some actionable information.
  5. These numbers are so old!
  6. Note: this was written before the iOS rating was split into “current version” and “all versions”. This is also affected a bit by Facebook shipping so fast now (every 2 weeks!) so the current version star rating is always resetting on iOS. On Android I believe there is still one rating for all app versions and this is a larger issue.

Air Mozilla: The Joy of Coding - Episode 37

mconley livehacks on real Firefox bugs while thinking aloud.

Daniel Pocock: Is giving money to Conservancy the best course of action?

There has been a lot of discussion lately about Software Freedom Conservancy's fundraiser.

Various questions come to my mind:

Is this the only way to achieve goals such as defending copyright? (There are other options, like corporate legal insurance policies)

When all the options are compared, is Conservancy the best one? Maybe it is, but it would be great to confirm why we reached that conclusion.

Could it be necessary to choose two or more options that complement each other? Conservancy may just be one part of the solution and we may get a far better outcome if money is divided between Conservancy and insurance and something else.

What about all the other expenses that developers incur while producing free software? Many other professionals, like doctors, do work that is just as valuable for society but they are not made to feel guilty about asking for payment and reimbursement. (In fact, for doctors, there is no shortage of it from the drug companies).

There seems to be an awkwardness about dealing with money in the free software world and it means many projects continue to go from one crisis to the next. Just yesterday on another mailing list there was discussion about speakers regularly asking for reimbursement to attend conferences and at least one strongly worded email appeared questioning whether people asking about money are sufficiently enthusiastic about free software or if they are only offering to speak in the hope their trip will be paid.

The DebConf team experienced one of the more disappointing examples of a budget communication issue when developers who had already volunteered long hours to prepare for the event then had to give up valuable time during the conference to wash the dishes for 300 people. The high cost of local labor was known when the country was selected; had the team simply budgeted for it, the task could easily have been outsourced to local staff. This came about because some members of the community felt nervous about asking for budget and other people couldn’t commit to spending.

Rather than stomping on developers who ask about money or anticipate the need for it in advance, I believe we need to ask people: if money were not taboo, what effort could they contribute to the free software world, and how much would they need to spend in a year to cover all the expenses involved? After all, isn’t that similar to the appeal from Conservancy’s directors? If all developers and contributors were suitably funded, then many people would budget for contributions to Conservancy, other insurances, attending more events and a range of other expenses that would make the free software world operate more smoothly.

In contrast, the situation we have now (for event-related expenses) is that developers funding themselves or with tightly constrained budgets or grants often have to spend hours picking through AirBNB and airline web sites trying to get the best deal while those few developers who do have more flexible corporate charge cards just pick a convenient hotel and don't lose any time reading through the fine print to see if there are charges for wifi, breakfast, parking, hidden taxes and all the other gotchas because all of that will be covered for them.

With developer budgets/wishlists documented, where will the money come from? Maybe it won’t appear, maybe it will. But if we don’t ask for it at all, we are much less likely to get anything. Mozilla has recently suggested that developers need more cash and offered to put $1 million on the table to fix the problem; is it possible other companies may see the benefit of this and put up some cash too?

Promoting one large budget and gathering donations is probably far more efficient than losing energy firefighting lots of little crisis situations.

Being more confident about money can also do a lot more to help engage people and make their participation sustainable in the long term. For example, if a younger developer is trying to save the equivalent of two years of their salary to pay a deposit on a house purchase, how will they feel about giving money to Conservancy or paying their own travel expenses to a free software event? Are their families and other people they respect telling them to spend or to save, and if our message is not compatible with that, is it harder for us to connect with these people?

One other thing to keep in mind is that budgeting needs to include the costs of those who may help the fund-raising and administration of money. If existing members of our projects are not excited about doing such work we have to be willing to break from the "wait for a volunteer or do-it-yourself" attitude. There are so many chores that we are far more capable of doing as developers that we still don't have time for, we are only fooling ourselves if we anticipate that effective fund-raising will take place without some incentives going back to those who do the work.

QMO: Firefox 43 Beta 7 Testday Results

Hi mozillians! \o/

Last Friday, November 27th, we held the Firefox 43.0 Beta 7 Testday, and it was another successful event!

First, many thanks go out to Moin Shaikh, Amlan Biswas, Iryna Thompson and the Bangladesh Community: Hossain Al Ikram, Nazir Ahmed Sabbir, T.M. Sazzad Hossain, Khalid Syfullah Zaman, Raihan Ali, Rezaul Huque Nayeem, Kazi Nuzhat Tasnem, Nazmus Shakib Robin, Sajedul Islam, Amlan Biswas, Tahsan Chowdhury Akash, Forhad Hossain, Sayed Mohammad Amir, Tanjil Haque, Saheda Reza Antora, Towkir Ahmed, Mohammed Jawad Ibne Ishaque, Fazle Rabbi, Jahir Islam, Umar Nasib, Mohammad Maruf Islam, Md. Faysal Alam Riyad, Ashickur Rahman, Md. Ehsanul Hassan, Md. Rahimul Islam and Rakibul Islam Ratul for getting involved – your help is always greatly appreciated!

Secondly, a big thank you to all our active moderators 😉

Results:

Keep an eye on QMO for upcoming events! 😉

 

Tarek Ziadé: Managing small teams

In the past three years, I went from being a developer on a team, to a team lead, to an engineering manager. I find my new position very challenging because of the size of my team and the remote aspect (we're all remote).

When you manage 4 or 5 people, you're in that weird spot where you're not going to spend 100% of your time doing manager stuff. So for the remaining time, the obvious thing to do is to help out your team by putting your developer hat back on.

But switching hats like this has a huge pitfall: you are the person giving people work to do depending on the organization's priorities, and you are also helping with development. That puts you in a position where it's easy to fall into micromanagement: you are asking someone or a group of people to be accountable for a task, and you are placing yourself on both sides.

I don't have any magic bullet to fix this, besides managing a bigger team where I'd spend 100% of my time on management. And I don't know if/when this will happen, because team sizes depend on the organization's priorities and on my growth as a manager.

So for now, I am trying to set a few rules for myself:

  1. When there's a development task, always delegate it to someone in the team and propose your help as a reviewer. Do not lead any development task, but try to have an impact on how things move forward, so they go in the direction you'd like them to go as a manager.
  2. Every technical help you are doing for your team should be done by working under the supervision of a member of your team. You are not a developer among other developers in your own team.
  3. If you lead a task, it should be isolated work that does not directly impact developers in the team, like building a prototype, etc.
  4. Never ever participate in team meetings with a developer hat on. You can give some feedback of course, but as a manager. If there are some technical points where you can help, you should tackle them through 1:1s. See #1

There. That's what I am trying to stick with going forward. If you have more tips I'll take them :)

I see this challenge as an interesting puzzle to solve, and a key for me to maximize my team's impact.

Coding was easier, damned...

Daniel Stenberg: What’s new in curl

We just shipped our 150th public release of curl, on December 2, 2015.

curl 7.46.0

One hundred and fifty public releases done during almost 18 years makes a little more than 8 releases per year on average. In mid November 2015 we also surpassed 20,000 commits in the git source code repository.

With the constant and never-ending release train concept of just another release every 8 weeks that we’re using, no release is ever the grand big next release with lots of bells and whistles. Instead we just add a bunch of things, fix a bunch of bugs, release and then loop. With no fanfare and without any press-stopping marketing events.

So, instead of just looking at what was made in this last release – you can check that out yourself in our changelog – I wanted to take a look at the last two years and show you what we have done in this period. curl and libcurl are the sort of tool and library that people use for a long time, and a large number of users have versions installed that are far older than two years. So hey, now I’d like to tease you and tell you what can be yours if you take the step straight into the modern-day curl or libcurl.

Thanks

Before we dive into the real contents, let’s not fool ourselves and think that we managed these years and all these changes without the tireless efforts and contributions from hundreds of awesome hackers. Thank you everyone! I keep calling myself lead developer of curl but it truly would not exist without all the help I get.

We keep getting a steady stream of new contributors and quality patches. Our problem is rather to review and receive the contributions in a timely manner. In a personal view, I would also like to just add that during these two last years I’ve had support from my awesome employer Mozilla that allows me to spend a part of my work hours on curl.

What happened the last 2 years in curl?

We released curl and libcurl 7.34.0 on December 17th 2013 (12 releases ago). What did we do since then that could be worth mentioning? Well, a lot, and I’m going to mostly skip the almost 900 bug fixes we did in this time.

Many security fixes

Almost half (18 out of 37) of the security vulnerabilities reported for our project were reported during the last two years. It may suggest a greater focus and more attention put on those details by users and developers. Security reports are a good thing, it means that we address and find problems. Yes it unfortunately also shows that we introduce security issues at times, but I consider that secondary, even if we of course also work on ways to make sure we’ll do this less in the future.

URL-specific options: --next

A pretty major feature that was added to the command line tool without much bang or whistle. You can now add --next as a separator on the command line to “group” options for specific URLs. This allows you to run multiple different requests on URLs that can still re-use the same connection and so on. It opens up lots of new fun and creative uses of curl and has in fact been requested on and off for the project’s entire lifetime!
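For example (URLs are placeholders), a single invocation can POST form data to one URL and then fetch another URL with its own options, re-using the connection where possible:

curl --data "name=daniel" https://example.com/api/submit \
     --next \
     -o saved.html https://example.com/index.html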

HTTP/2

There’s a new protocol version in town and during the last two years it was finalized and its RFC went public. curl and libcurl support HTTP/2, although you still need to explicitly ask for it to be used.

HTTP/2 is binary, multiplexed, uses compressed headers and offers server push. Since the command line tool is still serially sending and receiving data, the multiplexing and server push features can right now only get fully utilized by applications that use libcurl directly.

HTTP/2 in curl is powered by the nghttp2 library and it requires a fairly new TLS library that supports the ALPN extension to be fully usable for HTTPS. Since the browsers only support HTTP/2 over HTTPS, most HTTP/2 in the wild so far is done over HTTPS.

We’ve gradually implemented and provided more and more HTTP/2 features.

Separate proxy headers

For a very long time, there was no way to tell curl which custom headers to use when talking to a proxy and which to use when talking to the server – you’d just add a custom header to the request. This was never good and we eventually made it possible to specify them separately, and then, after the security alert on the same thing, we made that the default behavior.

Option man pages

We’ve had two user surveys as we now try to make it an annual spring tradition for the project. To learn what people use, what people think, what people miss etc. Both surveys have told us users think our documentation needs improvement and there has since been an extra push towards improving the documentation to make it more accessible and more readable.

One way to do that has been to introduce separate, stand-alone versions of man pages for each and every libcurl option – for the functions curl_easy_setopt, curl_multi_setopt and curl_easy_getinfo. Right now, that means 278 new man pages that are easier to link directly to, easier to search for with Google, etc., and they are now written with more text and more details for each individual option. In total, we now host and maintain 351 individual man pages.

The boringssl / libressl saga

The Heartbleed incident of April 2014 was a direct reason for libressl being created as a new fork of OpenSSL and I believe it also helped BoringSSL to find even more motivation for its existence.

Subsequently, libcurl can be built to use either one of these three forks based on the same origin.  This is however not accomplished without some amount of agony.

SSLv3 is also disabled by default

The continued number of problems detected in SSLv3 finally got it disabled by default in curl (together with SSLv2, which has been disabled by default for a while already). Now users need to explicitly ask for it in case they need it, and in some cases the TLS libraries do not even support it anymore. You may need to build your own binary to get the support back.

Everyone should move up to TLS 1.2 as soon as possible. HTTP/2 also requires TLS 1.2 or later when used over HTTPS.

support for the SMB/CIFS protocol

For the first time in many years we’ve introduced support for a new protocol, using the SMB:// and SMBS:// schemes. Maybe not the most requested feature out there, but it is another network protocol for transfers…

code of conduct

Triggered by several bad examples in other projects, we merged a code of conduct document into our source tree without much of a discussion, because this is the way this project always worked. This just makes it clear to newbies and outsiders in case there would ever be any doubt. Plus it offers a clear text saying what’s acceptable or not in case we’d ever come to a point where that’s needed. We’ve never needed it so far in the project’s very long history.

--data-raw

Just a tiny change, but more a symbol of the many small changes and advances we continue making. The --data option that is used to specify what to POST to a server can take a leading ‘@’ symbol followed by a file name, but that also makes it tricky to actually send a literal ‘@’, and it forces scripts etc. to make sure one doesn’t slip in accidentally.

--data-raw was introduced to only accept a string to send, without any ability to read from a file and without treating ‘@’ specially. If you include a ‘@’ in that string, it will be sent verbatim.
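For example (example.com is a placeholder): with --data a leading ‘@’ means “read the data from this file”, while --data-raw sends the string exactly as typed:

# --data interprets the '@' and tries to read a file named "payload.txt":
curl --data @payload.txt https://example.com/api
# --data-raw sends the string verbatim, '@' included:
curl --data-raw "user=@home" https://example.com/api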

attempting VTLS as a lib

We support eleven different TLS libraries in the curl project – that is probably more than all other transfer libraries in existence do. The way we do this is by providing an internal API for TLS backends, and we call that ‘vtls’.

In 2015 we started an effort to make that into its own sub-project to allow other open source projects and tools to use it. We managed to find a few hackers from the wget project who were also interested in participating. Unfortunately I didn’t feel I could put enough effort or time into it to drive it forward much, and while there was some initial work done by others it soon became obvious it wouldn’t go anywhere, and we pulled the plug.

The internal vtls glue remains fine though!

pull-requests on github

Not really a change in the code itself but still a change within the project. In March 2015 we changed our policy regarding pull-requests done on github. The effect has been a huge increase in number of pull-requests and a slight shift in activity away from the mailing list over to github more. I think it has made it easier for casual contributors to send enhancements to the project but I don’t have any hard facts backing this up (and I wouldn’t know how to measure this).

… as mentioned in the beginning, there have also been hundreds of smaller changes and bug fixes. What fun will you help us make reality in the next two years?

The Mozilla Blog: Visualizing the Invisible

Today, online privacy and threats like invisible tracking from third parties on the Web seem very abstract. Many of us are either not aware of what’s happening with our online data or we feel powerless because we don’t know what to do. More and more, the Internet is becoming a giant glass house where your personal information is exposed to third parties who collect and use it for their own purposes.

We recently released Private Browsing with Tracking Protection in Firefox – a feature focused on providing anyone using Firefox with meaningful choice over third parties on the Web that might be collecting data without their understanding or control. This is a feature which addresses the need for more control over privacy online but is also connected to an ongoing and important debate around the preservation of a healthy, open Web ecosystem and the problems and possible solutions to the content blocking question.

The Glass House

Earlier this month we dedicated a three-day event to the topic of online privacy in Hamburg, Germany. Today, we would like to share some impressions from the event and also an experiment we filmed on the city’s famous Reeperbahn.

Our experiment?

We set out to see if we could explain something that is not easily visible, online privacy, in a very tangible way. We built an apartment fully equipped with everything one needs to enjoy a short trip to Germany’s northern pearl. We made the apartment available to various travelers arriving to stay the night. Once they logged onto the apartment’s Wi-Fi, all the walls were removed, revealing the travelers to onlookers and to the external commotion caused when their private information turned out to be public.

The travelers’ responses are genuine.

That said, we did bring in a few actors for dramatic effect to help highlight a not-so-subtle reference to what can happen to your data when you aren’t paying attention. Welcome to the glass house.

While the results of the experiment are intended to educate and generate awareness, we also captured the participants’ thoughts and feelings after the reveal. Here are some of the most poignant reactions:

Discussing the State of Data Control on the Web Today

Over the next two days, in that same glass house, German technology and privacy experts, Hamburg’s Digital Media Women group, the Mozilla community and people interested in the topic of online privacy came together to discuss the State of Data and Control on the Web.

We kicked-off with a panel discussion. Moderated by Svenja Teichmann, founder and Managing Director of crowdmedia, German data protection experts spoke about various aspects of online privacy protection and questions like “What is private nowadays?” while passersby could look over their shoulders through the glass walls.

Glass House: Panel Discussion on Online Privacy. From left to right: Lars Reppesgaard (Author, “The Google Empire”), Svenja Teichmann (crowdmedia), Frederick Richter (Chairman, German Data Protection Foundation) and Winston Bowden (Sr. Manager, Firefox Product Marketing)

Frederick Richter pointed to the user’s uncertainty: “On the Web we are not aware of who is watching us. And many people can’t protect their privacy online, because they don’t have easy features to use.” Lars Reppesgaard is not fundamentally against tracking but thinks users must have a choice: “If you want the technology to help you, it has to collect data sometimes. But for most users it’s not obvious when and by whom they are tracked.” When it came to the new Tracking Protection feature in Private Browsing on Firefox, Winston Bowden emphasized: “We are not an enemy of online advertising. It’s a legitimate source of income and guarantees highly exciting content on the Web. But tracking users without them knowing or tracking them even if they actively decided against it, won’t work. The open and free Web is a valuable asset, which we should protect. Users have to be in control of their data.”

Educating and Engaging

Finally, German Mozilla community members joined the event to inform and educate people about how Firefox can help users gain control over their online experience. They explained the background and genesis of Tracking Protection but also showed tools such as Lightbeam and talked about Smart On Privacy and Web Literacy programs to offer better insight into how the Web works.

Glass House: Community Engagement. Thanks to all who worked behind the scenes and/or came to Hamburg and made this event possible. We appreciate your help educating and advocating for people about their choice and control over online privacy.

Mozilla Addons Blog: December 2015 Featured Add-ons

Pick of the Month: Fox Web Security

by Oleksandr
Fox Web Security is designed to automatically block known dangerous websites and unwanted content that is not suitable for children.

“This add-on is extremely fast and effective! You can say goodbye to porno sites, scams and viruses—now my web is absolutely safe.”

Featured: YouTube™ Flash-HTML5

by A Ulmer
YouTube™ Flash-HTML allows you to play YouTube Videos in Flash or HTML5 player.

Featured: AdBlock for YouTube™

by AdblockLite
AdBlock for YouTube™ removes all ads from YouTube.

Featured: 1-Click YouTube Video Download

by The 1-Click YouTube Video Download Team
The simplest YouTube Video Downloader for all YouTube Flash sites, period.

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

Karl Dubost: CSS prefixes and gzip compression

I was discussing with Mike how some Web properties target only WebKit/Blink browsers (for their mobile sites), to the point that they do not add the standard properties for certain CSS features. We see that a lot in Japan, for example, but not only there.

We often see things like this code:

.nBread{
    min-height: 50px;
    display: -webkit-box;
    -webkit-box-align: center;
    -webkit-box-pack: center;
    padding-bottom: 3px;
}

which is easily fixed by just adding the necessary properties:

.nBread{
    min-height: 50px;
    display: -webkit-box;
    -webkit-box-align: center;
    -webkit-box-pack: center;
    padding-bottom: 3px;
    display: flex;
    align-items: center;
    justify-content: center;
}

It would make the Web site more future resilient too.

gzip Compression and CSS

Adding the standard properties costs a couple of extra bytes in the CSS. Mike wondered whether gzip compression would absorb that cost, since the prefixed and standard declarations form repeated text patterns:

#foo {
-webkit-box-shadow: 1px 1px 1px red;
box-shadow: 1px 1px 1px red;
}

Pattern of compression for a CSS file

It seems to work. Building on Mike's idea, I wondered whether the order of declarations was significant, so I tested by adding additional properties and changing the order.

mike.prefix.css

#foo {
background-color: #fff;
-webkit-box-shadow: 1px 1px 1px red;
}

mike.both.css

#foo {
background-color: #fff;
-webkit-box-shadow: 1px 1px 1px red;
box-shadow:1px 1px 1px red;
}

mike.both-order.css

#foo {
-webkit-box-shadow: 1px 1px 1px red;
background-color: #fff;
box-shadow:1px 1px 1px red;
}

then ran tests similar to Mike's.

Pattern of compression for a CSS file

Obviously the order matters, because it helps gzip to find text patterns to compress.

  • raw: 70 compressed:  98 gzip -c mike.prefix.css | wc -c
  • raw: 98 compressed: 100 gzip -c mike.both.css | wc -c
  • raw: 98 compressed: 106 gzip -c mike.both-order.css | wc -c
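The same check can be scripted. Here is a small Python sketch over the three test files above; the absolute byte counts may differ slightly from the gzip command because of header details, but the relative ordering is what matters:

import gzip
from pathlib import Path

for name in ("mike.prefix.css", "mike.both.css", "mike.both-order.css"):
    raw = Path(name).read_bytes()
    packed = gzip.compress(raw)
    print(f"{name}: raw {len(raw)} bytes, compressed {len(packed)} bytes")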

Flexbox and Gradients Drawbacks

For things like -webkit- flexbox and gradients, it doesn't help very much, because the syntaxes are very different (see the first piece of code in this post), but for properties where the standard property is just the unprefixed version, the order matters. It would be interesting to test this on real, long CSS files and not just a couple of properties.

Otsukare!

Mozilla Addons Blog: De-coupling Reviews from Signing Unlisted Add-ons

tl;dr – By the end of this week (December 4th), we plan to completely automate the signing of unlisted add-ons and remove the trigger for manual reviews.

Over the past few days, there have been discussions around the first step of the add-on signing process, which involves a programmatic review of submissions by a piece of code known as the “validator”. The validator can trigger a manual review of submissions for a variety of reasons and halt the signing process, which can delay the release of an add-on because of the signing requirement that will be enforced in Firefox 43 and later versions.

There has been debate over whether the validator is useful at all, since it is possible for a malicious player to write code that bypasses it. We agree the validator has limitations; the reality is we can only detect what we know about, and there’s an awful lot we don’t know about. But the validator is only one component of a review process that we hope will make it easier for developers to ship add-ons, and safer for people to use them. It is not meant to be a catch-all malware detection utility; rather, it is meant to help developers get add-ons into the hands of Firefox users more expediently.

With that in mind, we are going to remove validation as a gating mechanism for unlisted add-ons. We want to make it easier for developers to ship unlisted add-ons, and will perform reviews independently of any signing process. By the end of this week (December 4th), we plan to completely automate the signing of unlisted add-ons and remove the trigger for manual reviews. This date is contingent on how quickly we can make the technical, procedural, and policy changes required to support this. The add-ons signing API, introduced earlier this month, will allow for a completely automated signing process, and will be used as part of this solution.

We’ll continue to require developers to adhere to the Firefox Add-ons policies outlined on MDN, and would ask that they ensure their add-ons conform to those polices prior to submitting them for signing. Developers should also be familiar with the Add-ons Reviewer Guide, which outlines some of the more popular reasons an add-on would fail a review and be subject to blocklisting.

I want to thank everyone for their input and insights over the last week. We want to make sure the experience with Firefox is as painless as possible for Add-on developers and users, and our goals have never included “make life harder”, even if it sometimes seems that way. Please continue to speak out, and feel free to reach out to me or other team members directly.

I’ll post a more concrete overview of the next steps as they’re available, and progress will be tracked in bug 1229197. Thanks in advance for your patience.

kev

Chris AtLee: MozLando Survival Guide

MozLando is coming!

I thought I would share a few tips I've learned over the years of how to make the most of these company gatherings. These summits or workweeks are always full of awesomeness, but they can also be confusing and overwhelming.

#1 Seek out people

It's great to have a (short!) list of people you'd like to see in person. Maybe somebody you've only met on IRC / vidyo or bugzilla?

Having a list of people you want to say "thank you" in person to is a great way to approach this. Who doesn't like to hear a sincere "thank you" from someone they work with?

#2 Take advantage of increased bandwidth

I don't know about you, but I can find it pretty challenging at times to get my ideas across in IRC or on an etherpad. It's so much easier in person, with a pad of paper or whiteboard in front of you. You can share ideas with people, and have a latency/lag-free conversation! No more fighting AV issues!

#3 Don't burn yourself out

A week of full days of meetings, code sprints, and blue sky dreaming can be really draining. Don't feel bad if you need to take a breather. Go for a walk or a jog. Take a nap. Read a book. You'll come back refreshed, and ready to engage again.

That's it!

I look forward to seeing you all next week!

Air MozillaWebdev Extravaganza: December 2015

Webdev Extravaganza: December 2015 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Chris H-CTo-Order Telemetry Dashboards: dashboard-generator

Say you’ve been glued to my posts about Firefox Telemetry. You became intrigued by the questions you could answer and ask using actual data from actual users, and considered writing your own website using the single-API telemetry-wrapper.

However, you aren’t a web developer. You don’t like JavaScript. Or you’re busy. Or you don’t like reading READMEs on GitHub.

This is where dashboard-generator can step in to help out. Simply visit the website and build-your-own dash to your exacting specifications:

(screenshot of the dashboard generator form)

Choose your channel, version, and metric. “-Latest-” will ensure that the generated dashboard will always use the latest version in the selected channel when you reload that page. Otherwise, you might find yourself always looking at GC_MS values from beta 39.

If you are only interested in clients reporting from a particular application, operating system, or with a certain E10s setting then make your choices in Filters.

If you want a histogram like telemetry.mozilla.org’s “Histogram Dashboard” then make sure you select Histogram and then choose if you want the ends of the histogram trimmed, whether (and how sensibly) you want to compare clients across particular settings, and whether to sanitize the results so you only use data that is valid and has a lot of samples.

If you want an evolution plot like telemetry.mozilla.org’s “Evolution Dashboard” then select Evolution. From there, choose whether to use the build date or submission date of samples, how many versions back from the selected one you would like to graph the values over, and whether to sanitize the results so you only use data that is valid and has a lot of samples.

Your choices made, click “Add to Dashboard”. Then choose again! And again!

Make a mistake? Don’t worry, you can remove rows using the ‘-‘ buttons.

Not sure what it’ll look like when you’re done? Hit ‘Generate Dashboard’ and you’ll get a preview in CodePen showing what it will look like and giving you an opportunity to fiddle with the HTML, CSS, and JS.

(screenshot of the CodePen preview)

When you see something you like in the CodePen, hit ‘Save’ and it’ll give you a URL you can use to collaborate with others, and an option to ‘Export’ the whole site for when you want to self-host.

If you find any bugs or have any requests, please file an issue ticket here. I’ll be using it to write an E10s dashboard in the near term, and hope you’ll use it, too!

:chutten


Mozilla FundraisingMozilla’s New Donation Form Features

We’ve been redoing our donation form for this end of year campaign, and have a couple of major changes. We’ve talked about this in a previous post. Stripe: Our first, and probably biggest, change is using Stripe to accept non-PayPal donations. This … Continue reading

Jan de MooijTesting Math.random(): Crushing the browser

(For tl;dr, see the Conclusion.)

A few days ago, I wrote about Math.random() implementations in Safari and (older versions of) Chrome using only 32 bits of precision. As I mentioned in that blog post, I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+. V8 has been using the same algorithm since last week. (Update Dec 1: WebKit is now also using XorShift128+!)

The most extensive RNG test is TestU01. It's a bit of a pain to run: to test a custom RNG, you have to compile the library and then link it to a test program. I did this initially for the SpiderMonkey shell but after that I thought it'd be more interesting to use Emscripten to compile TestU01 to asm.js so we can easily run it in different browsers.

Today I tried this and even though I had never used Emscripten before, I had it running in the browser in less than an hour. Because the tests can take a long time, it runs in a web worker. You can try it for yourself here.
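
As a rough illustration of that setup (the worker script name and the message format below are assumptions, not the actual harness), the main page spawns a worker, asks it to run a battery, and just logs whatever the worker reports back:

// Spawn a worker that is assumed to load the Emscripten-compiled TestU01 build.
const worker = new Worker("testu01-worker.js"); // hypothetical worker script

worker.onmessage = (event) => {
  // The worker is assumed to post progress lines and a final summary as strings.
  console.log(event.data);
};

worker.onerror = (err) => {
  console.error("TestU01 worker failed:", err.message);
};

// Ask for the quick battery; the message shape here is illustrative only.
worker.postMessage({ battery: "SmallCrush" });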

I also wanted to test window.crypto.getRandomValues() but unfortunately it's not available in workers.

Disclaimer: browsers implement Math functions like Math.sin differently and this can affect their precision. I don't know if TestU01 uses these functions and whether it affects the results below, but it's possible. Furthermore, some test failures are intermittent so results can vary between runs.

Results

TestU01 has three batteries of tests: SmallCrush, Crush, and BigCrush. SmallCrush runs only a few tests and is very fast. Crush and especially BigCrush have a lot more tests so they are much slower.

SmallCrush

Running SmallCrush takes about 15-30 seconds. It runs 10 tests with 15 statistics (results). Here are the number of failures I got:

Browser                      Number of failures
Firefox Nightly              1: BirthdaySpacings
Firefox with XorShift128+    0
Chrome 48                    11
Safari 9                     1: RandomWalk1 H
Internet Explorer 11         1: BirthdaySpacings
Edge 20                      1: BirthdaySpacings

Chrome/V8 failing 11 out of 15 is not too surprising. Again, the V8 team fixed this last week and the new RNG should pass SmallCrush.

Crush

The Crush battery of tests is much more time consuming. On my MacBook Pro, it finishes in less than an hour in Firefox but in Chrome and Safari it can take at least 2 hours. It runs 96 tests with 144 statistics. Here are the results I got:

Browser                      Number of failures
Firefox Nightly              12
Firefox with XorShift128+    0
Chrome 48                    108
Safari 9                     33
Internet Explorer 11         14

XorShift128+ passes Crush, as expected. V8's previous RNG fails most of these tests and Safari/WebKit isn't doing too great either.

BigCrush

BigCrush didn't finish in the browser because it requires more than 512 MB of memory. To fix that I probably need to recompile the asm.js code with a different TOTAL_MEMORY value or with ALLOW_MEMORY_GROWTH=1.

Furthermore, running BigCrush would likely take at least 3 hours in Firefox and more than 6-8 hours in Safari, Chrome, and IE, so I didn't bother.

The XorShift128+ algorithm being implemented in Firefox and Chrome should pass BigCrush (for Firefox, I verified this in the SpiderMonkey shell).

About IE and Edge

I noticed Firefox (without XorShift128+) and Internet Explorer 11 get very similar test failures. When running SmallCrush, they both fail the same BirthdaySpacings test. Here's the list of Crush failures they have in common:

  • 11 BirthdaySpacings, t = 2
  • 12 BirthdaySpacings, t = 3
  • 13 BirthdaySpacings, t = 4
  • 14 BirthdaySpacings, t = 7
  • 15 BirthdaySpacings, t = 7
  • 16 BirthdaySpacings, t = 8
  • 17 BirthdaySpacings, t = 8
  • 19 ClosePairs mNP2S, t = 3
  • 20 ClosePairs mNP2S, t = 7
  • 38 Permutation, r = 15
  • 40 CollisionPermut, r = 15
  • 54 WeightDistrib, r = 24
  • 75 Fourier3, r = 20

This suggests the RNG in IE may be very similar to the one we used in Firefox (imported from Java decades ago). Maybe Microsoft imported the same algorithm from somewhere? If anyone on the Chakra team is reading this and can tell us more, it would be much appreciated :)

IE 11 fails 2 more tests that pass in Firefox. Some failures are intermittent and I'd have to rerun the tests to see if these failures are systematic.

Based on the SmallCrush results I got with Edge 20, I think it uses the same algorithm as IE 11 (not too surprising). Unfortunately the Windows VM I downloaded to test Edge shut down for some reason when it was running Crush so I gave up and don't have full results for it.

Conclusion

I used Emscripten to port TestU01 to the browser. Results confirm most browsers currently don't use very strong RNGs for Math.random(). Both Firefox and Chrome are implementing XorShift128+, which has no systematic failures on any of these tests.

Furthermore, these results indicate IE and Edge may use the same algorithm as the one we used in Firefox.

The Servo BlogThis Week In Servo 43

In the last two weeks, we landed 165 PRs in the Servo organization’s repositories.

The huge news from the last two weeks is that after some really serious efforts from across the team and community to handle the libc changes required, we’ve upgraded Rust compiler versions! This change is more exciting than usual because it switches us from our custom Rust compiler and onto the nightlies produced by the Rust team. The following upgrade was really quick!

Now that we have separate support for making try builds, we have added dzbarsky, ecoal95, KiChjang, ajeffrey, and waffles. Please nominate your local friendly contributor today!

Notable additions

  • notriddle made GitHub look better
  • ms2ger ran rustfmt and began cleaning up our code
  • bholley landed type system magic for the layout wrapper
  • frewsxcv implemented a compile time url parsing macro
  • dzbarsky implemented currentColor for Canvas
  • pcwalton improved ipc error reporting
  • simonsapin removed string-cache’s plugin usage
  • mbrubeck fixed hit testing for iframe content
  • jgraham and crzytrickster did lots of webdriver work
  • evilpie implemented the document.domain getter
  • waffles improved the feedback when trying to open a missing file
  • mfeckie added “last modified” information to our “good first PR” aggregator, Servo Starters
  • frewsxcv landed compile-time URL parsing
  • kichjang provided MIME types for file:// URLs
  • pcwalton split the engine into multiple sandboxed processes

New Contributors

Screenshots

Screencast of this post being submitted to Hacker News:

(screencast)

Meetings

At the meeting two weeks ago we discussed intermittent test failures, using a mailing lists vs. discourse, the libcpocalypse, and our E-Easy issues. There was no meeting last week.

Kartikaya GuptaAsynchronous scrolling in Firefox

In the Firefox family of products, we've had asynchronous scrolling (aka async pan/zoom, aka APZ, aka compositor-thread scrolling) in Firefox OS and Firefox for Android for a while - even though they had different implementations, with different behaviors. We are now in the process of taking the Firefox OS implementation and bringing it to all our other platforms - including desktop and Android. After much hard work by many people, including but not limited to :botond, :dvander, :mattwoodrow, :mstange, :rbarker, :roc, :snorp, and :tn, we finally have APZ enabled on the nightly channel for both desktop and Android. We're working hard on fixing outstanding bugs and getting the quality up before we let it ride the trains out to DevEdition, Beta, and the release channel.

If you want to try it on desktop, note that APZ requires e10s to be enabled, and is currently only enabled for mousewheel/trackpad scrolling. We do have plans to implement it for other input types as well, although that may not happen in the initial release.

Although getting the basic machinery working took some effort, we're now mostly done with that and are facing a different but equally challenging aspect of this change - the fallout on web content. Modern web pages have access to many different APIs via JS and CSS, and implement many interesting scroll-linked effects, often triggered by the scroll event or driven by a loop on the main thread. With APZ, these approaches don't work quite so well because inherently the user-visible scrolling is async from the main thread where JS runs, and we generally avoid blocking the compositor on main-thread JS. This can result in jank or jitter for some of these effects, even though the main page scrolling itself remains smooth. I picked a few of the simpler scroll effects to discuss in a bit more detail below - not a comprehensive list by any means, but hopefully enough to help you get a feel for some of the nuances here.

Smooth scrolling

Smooth scrolling - that is, animating the scroll to a particular scroll offset - is something that is fairly common on web pages. Many pages do this using a JS loop to animate the scroll position. Without taking advantage of APZ, this will still work, but can result in less-than-optimal smoothness and framerate, because the main thread can be busy with doing other things.

Since Firefox 36, we've had support for the scroll-behavior CSS property which allows content to achieve the same effect without the JS loop. Our implementation for scroll-behavior without APZ enabled still runs on the main thread, though, and so can still end up being janky if the main thread is busy. With APZ enabled, the scroll-behavior implementation triggers the scroll animation on the compositor thread, so it should be smooth regardless of load on the main thread. Polyfills for scroll-behavior or old-school implementations in JS will remain synchronous, so for best performance we recommend switching to the CSS property where possible. That way as APZ rolls out to release, you'll get the benefits automatically.
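
As a small sketch of that recommendation (the function name and the fallback animation are illustrative assumptions, not code from this post), you can feature-detect scroll-behavior, which is also exposed to script via the behavior member of ScrollToOptions, and only fall back to a main-thread loop where it is missing:

function scrollToTop() {
  if ("scrollBehavior" in document.documentElement.style) {
    // Native smooth scrolling; with APZ this animates on the compositor thread.
    window.scrollTo({ top: 0, behavior: "smooth" });
  } else {
    // Main-thread fallback loop; this is the kind of code that janks under load.
    const step = () => {
      const y = window.scrollY;
      if (y > 0) {
        window.scrollTo(0, Math.max(0, y - y / 8 - 1));
        requestAnimationFrame(step);
      }
    };
    requestAnimationFrame(step);
  }
}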

Here is a simple example page that has a spinloop to block the main thread for 500ms at a time. Without APZ, clicking on the buttons results in a very janky/abrupt scroll, but with APZ it should be smooth.

position:sticky

Another common paradigm seen on the web is "sticky" elements - they scroll with the page for a bit, and then turn into position:fixed elements after a point. Again, this is usually implemented with JS listening for scroll events and updating the styles on the elements based on the scroll offset. With APZ, scroll events are going to be delayed relative to what the user is seeing, since the scroll events arrive on the main thread while scrolling is happening on the compositor thread. This will result in glitches as the user scrolls.

Our recommended approach here is to use position:sticky when possible, which we have supported since Firefox 32, and which we have support for in the compositor. This CSS property allows the element to scroll normally but take on the behavior of position:fixed beyond a threshold, even with APZ enabled. This isn't supported across all browsers yet, but there are a number of polyfills available - see the resources tab on the Can I Use position:sticky page for some options.
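
A small sketch of how one might detect support before reaching for a polyfill (the detection uses standard CSS.supports/style-assignment probing; what you do in the unsupported branch is up to you):

function supportsSticky() {
  if (window.CSS && CSS.supports) {
    return CSS.supports("position", "sticky") ||
           CSS.supports("position", "-webkit-sticky");
  }
  // Older fallback: assign the value and see whether the browser kept it.
  const el = document.createElement("div");
  el.style.position = "sticky";
  return el.style.position === "sticky";
}

if (!supportsSticky()) {
  // Load whichever polyfill you picked from the Can I Use resources tab here.
  console.log("position:sticky not supported; falling back to a polyfill");
}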

Again, here is a simple example page that has a spinloop to block the main thread for 500ms at a time. With APZ, the JS version will be laggy but the position:sticky version should always remain in the right place.

Parallax

Parallax. Oh boy. There's a lot of different ways to do this, but almost all of them rely on listening to scroll events and updating element styles based on that. For the same reasons as described in the previous section, implementations of parallax scrolling that are based on scroll events are going to be lagging behind the user's actual scroll position. Until recently, we didn't have a solution for this problem.

However, a few days ago :mattwoodrow landed compositor support for asynchronous scroll adjustments of 3D transforms, which allows a pure CSS parallax implementation to work smoothly with APZ. Keith Clark has a good writeup on how to do this, so I'm just going to point you there. All of his demo pages should scroll smoothly in Nightly with APZ enabled.

Unfortunately, it looks like this CSS-based approach may not work well across all browsers, so please make sure to test carefully if you want to try it out. Also, if you have suggestions on other methods on implementing parallax so that it doesn't rely on a responsive main thread, please let us know. For example, :mstange created one at http://tests.themasta.com/transform-fixed-parallax.html which we should be able to support in the compositor without too much difficulty.

Other features

I know that there are other interesting scroll-linked effects that people are doing or want to do on the web, and we'd really like to support them with asynchronous scrolling. The Blink team has a bunch of different proposals for browser APIs that can help with these sorts of things, including things like CompositorWorker and scroll customization. For more information and to join the discussion on these, please see the public-houdini mailing list. We'd love to get your feedback!

(Thanks to :botond and :mstange for reading a draft of this post and providing feedback.)

Gijs KruitboschDid it land?

I wrote a thing to check if your patch landed/stuck. It’s on github because that’s what people seem to do these days. That means you can use it here:

Did it land?

The “point” of this mini-project is to be able to easily determine whether bug X made today’s nightly, or whether bug Y landed in beta 5. Non-graph changelogs, such as the ones most accessible on hgweb, can sometimes be misleading (i.e. beta 5 was tagged after you landed, but on a revision from before you landed…). Besides, it’s boring to look up revisions manually in a bug, then look them up on hgweb, and then try to determine whether revision A is in the ancestry tree of revision B. So I automated it.

Note that the tool doesn’t:

  • deal cleverly with backouts. It’ll give you revision hashes from the bug, but if it notices comments that seem to indicate something got backed out, it will be cautious about saying “yes, this landed”. If you know that you bounced once but the last revision(s) is/are definitely “enough” to have the fixes be considered “landed”, then you can just switch to looking up a revision instead of a bug, copy-paste the last hash, and try that one. With a bit of work it could probably expose the internal data about which commits landed before a nightly in the UI – the data is there!
  • use hg to extract the bug metadata. It’s dumb and just asks for a bug’s comments from bugzilla. Pull requests or other help about how to do this “properly” welcome.
  • deal cleverly with branching. If you select aurora/beta, it will look for commits that landed on aurora/beta, not for commits that landed on “earlier” trees and made their way down to aurora/beta with the regular train. This is not super hard to fix, I think, but I haven’t gotten around to it, and I don’t think it will be a very common case.
  • have a particularly nice UI. Feel free to send me pull requests to make it look better.

Andreas TolfsenWebDriver update from TPAC 2015

I came back from TPAC (the W3C’s Technical Plenary/Advisory Committee meeting week) earlier this month, where I attended the Browser Testing and Tools Working Group’s meetings on WebDriver.

Unlike at previous meetings, this was the first time we had reasonably up-to-date specification text to discuss. That clearly paid off, because we were able to make some defining decisions on long-standing, controversial topics. It shows how important it is for assigned action items to be completed in time before a specification meeting, and to have someone with time dedicated to working on the spec.

Visibility

The WG decided to punt the element visibility, or “displayedness” concept, to level 2 of the specification and in the meantime push for better visibility primitives in the platform. I’ve previously outlined in detail the reasons why it’s not just a bad idea—but impossible—for WebDriver to specify this concept. Instead we will provide a non-normative description of Selenium’s visibility atom in an appendix to give some level of consistency for implementors.

Fortunately Selenium’s visibility approximation atom can be implemented entirely in content JavaScript, which means it can be provided in both client bindings and as extension commands.

This does not mean we are giving up on visibility. There is general agreement in the WG that it is a desirable feature, but since it’s impossible to define naked eye visibility using existing platform APIs we call upon other WGs to help outline this. Visibility of elements in viewport is not a primitive that naturally fits within the scope of WebDriver.

Our decision has implications for element interactability, which is used to determine if you can interact with an element. This previously relied on the element visibility algorithm, but as an alternative to the tree traversal visibility algorithm we dismissed, we are experimenting with a somewhat naïve hit-testing alternative that takes the centre coordinates of the portion of the element inside the viewport and calls elementsAtPoint, ignoring elements that are opaque.

Attributes and properties

We had previously decided to make two separate commands for getting attributes and properties. This was controversial because it deviates from the behaviour of Selenium’s getAttribute, that conflates the DOM concepts of attributes and properties.

Because the WG decided to stick with David Burns’s proposal on special-casing boolean attributes, the good news is that the Selenium behaviour can be emulated using WebDriver primitives.

In practice this means that when Get Element Attribute is called for an element that carries a boolean attribute, this will return a string "true", rather than the DOM attribute value which would normally be an empty string. We return a string so that dynamically typed programming languages can evaluate this into something truthful, and because there is a belief in the WG that an empty string return value for e.g. <input disabled>, would be confusing to users.

Because we don’t know which attributes are boolean attributes from the DOM’s point of view, it’s not the cleanest approach since it means we must maintain a hard-coded list in WebDriver. It will also arguably cause problems for custom elements, because it is not a given that they mirror the default attribute values.
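
As a rough illustration of the boolean-attribute special-casing described above (this uses the selenium-webdriver JavaScript bindings purely as an example client; the behaviour shown is the one the WG decided on, not a quote from the spec text):

const { Builder, By } = require("selenium-webdriver");

(async function main() {
  const driver = await new Builder().forBrowser("firefox").build();
  try {
    // A page whose <input> carries a boolean attribute; its DOM attribute value is "".
    await driver.get("data:text/html,<input disabled>");
    const input = await driver.findElement(By.css("input"));

    // Get Element Attribute special-cases boolean attributes and returns "true".
    console.log(await input.getAttribute("disabled")); // expected: "true"
  } finally {
    await driver.quit();
  }
})();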

Test suite

One of the requirements for moving to REC is writing a decent test suite. WebDriver is in the fortunate position that it’s an evolution of existing implementations, each with their own body of tests, many of which we can probably re-purpose. One of the challenges with the existing tests is that the harness does not easily allow for testing the lower level details of the protocol.

So far I have been able to make a start with merging Microsoft’s pending pull requests. Not all the tests merged match what the specification mandates any longer, but we decided to do this before any substantial harness work is done, to eliminate the need for Microsoft to maintain their own fork of Web Platform Tests.

Onwards

Microsoft and Mozilla are both working on implementations, so there is a pressing need for a test suite that reflects the realities of the specification. Vital chapters, such as Element Retrieval and Interactions, are either undefined or in such a poor state that they should be considered unimplementable.

Despite these reservations, I’d say the WebDriver spec is in a better state than ever before. At TPAC we also had meetings about possible future extensions, including permissions and how WebDriver might help facilitate testing of WebBluetooth as well as other platform APIs.

The WG is concurrently pushing for WebDriver to be used in Web Platform Tests to automate the “non-automatable” test cases that require human interaction or privileged access. In fact, there’s an ongoing Quarter of Contribution project sponsored by Mozilla to work on facilitating WebDriver in a sort of “meta-circular” fashion, directly from testharness.js tests.

But more on that later. (-:

Air MozillaAt your service! Practical uses of Service Workers (in Spanish)

At your service! Practical uses of Service Workers (in Spanish) Service Workers are one of the newest and most revolutionary concepts on the Web. From the Firefox OS team, we try to unravel the...

This Week In RustThis Week in Rust 107

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • Diesel. A safe, extensible ORM and Query Builder for Rust.
  • Chomp. Fast parser combinator library for Rust.
  • libkeccak-tiny. A tiny implementation of SHA-3, SHAKE, Keccak, and sha3sum in Rust.
  • Waitout. Simple interface for tracking and awaiting the completion of multiple asynchronous tasks.

Updates from Rust Core

69 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • androm3da
  • ebadf
  • Ivan Stankovic
  • Jack Fransham
  • Jeffrey Seyfried
  • Josh Austin
  • Kevin Yeh
  • Matthias Bussonnier
  • Philipp Matthias Schäfer
  • xd1le

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is Chrono, a crate that offers very handy timezone-aware Duration and Date/Time types.

Thanks to Ygg01 for the suggestion. Submit your suggestions for next week!

Emma IrwinRevisiting the Word ‘Recognition’ in #FOSS and the Dream of Open Credentials

I think a lot about ways we can better surface Participation as a real-world offering for professional and personal development.

And this tweet from Laura  triggered all kinds of thinking.

Most thinking was reminiscent at first. 

Working on open projects teaches relevant skills, helps establish mentorship relationships and surfaces hidden strengths and talents. It’s my own story.

And then reflective..

The reason we’ve struggled to make participation a universally recognized opportunity for credential building is our confusion over the term ‘recognition’. In Open Source we use this one term to cover similar, yet entirely different, meanings:

* Gratitude (“hey thanks for that !”)

* You’re making progress (“great work, keep going! “)

* Appreciation (“we value you”)

* You completed or finished something (congratulations you did it!)

In my opinion, many experiments with badges for FOSS participation have actually compounded the problem: if I am issued a badge I didn’t request (and I have many of these), or don’t value (I have many of these too), we’re using the process as a prod and not as a genuine acknowledgement of accomplishment. That’s OK; gamification is OK – but it’s not credential building in the real-world sense, and we need to separate these two ‘use cases’ to move forward with open credentials.

And I kept thinking…

The Drupal community already does a good job of helping people surface real-world credentials. Drupal.org member profiles expose contribution and community leadership, while business profiles demonstrate (and advertise) their commitment through project sponsorship and contribution. Drupal also has this fantastic series of project ladders, which I’ve always thought would be a great way to experiment with badges, designing connected learning experiences through participation. Drupal ladders definitely inspired my own work around a ‘Participation Standard‘, and I wonder how projects can work together a bit more on defining a standard for ‘Distributed Recognition’, even between projects like Mozilla, Drupal and Fedora.

And the relentless thinking continued…

I then posed the question in our Discourse — asking what ‘Open Credentials’ could look like for Participation at Mozilla. And there are some great responses so far, including solutions like Makerbase, and a reminder of how hard it currently is to be ‘seen’ in the Mozilla community, and thus how important this topic actually is.

(screenshot of the Open Certification discussion on Mozilla Discourse)

And the thinking will continue, hopefully as a growing group ….

What I do know is that we have to stop using the word recognition as a catch-all, and that there is a huge opportunity to build Open Credentials through Participation; the leadership framework might be a way to test what that looks like.

If you have opinions – would love to have you join our discussion thread!

image by jingleslenobel CC by-NC-ND 2.0


Robert O'CallahanEven More rr Replay Performance Improvements!

While writing my last blog post I realized I should try to eliminate no-op reschedule events from rr traces. The patch turned out to be very easy, and the results are impressive:

Now replay is faster than recording in all the benchmarks, and for Mochitest is about as fast as normal execution. (As discussed in my previous post, this is probably because the replay excludes some code that runs during normal execution: the test harness and the HTTP server.) Hopefully this turns into real productivity gains for rr users.

Adam RoachBetter Living through Tracking Protection

There's been a bit of a hullabaloo in the press recently about blocking of ads in web browsers. Very little of the conversation is new, but the most recent round of discussion has been somewhat louder and more excited, in part because of Apple's recent decision to allow web content blockers on the iPhone and iPad.

In this latest round of salvos, the online ad industry has taken a pretty brutal beating, and key players appear to be rethinking long-entrenched strategies. Even the Interactive Advertising Bureau -- which has referred to ad blocking as "robbery" and "an extortionist scheme" -- has gone on record to admit that Internet ads got so bad that users basically had no choice but to start blocking them.

So maybe things will get better in the coming months and years, as online advertisers learn to moderate their behavior. Past behavior shows a spotty track record in this area, though, and change will come slowly. In the meanwhile, there are some pretty good tools that can help you take back control of your web experience.

How We Got Here

While we probably all remember the nadir of online advertising -- banners exhorting users to "punch the monkey to win $50", epilepsy-inducing ads for online gambling, and out-of-control popup ads for X10 cameras -- the truth is that most ad networks have already pulled back from the most obvious abuses of users' eyeballs. It would appear that annoying users into spending money isn't a winning strategy.

Unfortunately, the move away from hyperkinetic ads to more subtle ones was not a retreat as much as a carefully calculated refinement. Ads nowadays are served by colossal ad networks with tendrils on every site -- and they're accompanied by pretty sophisticated code designed to track you around the web.

The thought process that went into this is: if we can track you enough, we learn a lot about who you are and what your interests are. This is driven by the premise that people will be less annoyed by ads that actually fit their interests; and, at the same time, such ads are far more likely to convert into a sale.

Matching relevant ads to users was a reasonable goal. It should have been a win-win for both advertisers and consumers, as long as two key conditions were met: (1) the resulting system didn't otherwise ruin the web browsing experience, and (2) users who don't want to have their personal movements across the web could tell advertisers not to track them, and have those requests honored.

Neither is true.

Tracking Goes off the Rails

Just like advertisers went overboard with animated ads, pop-ups, pop-unders, noise-makers, interstitials, and all the other overtly offensive behavior, they've gone overboard with tracking.

You hear stories of overreach all the time: just last night, I had a friend recount how she got an email (via Gmail) from a friend that mentioned front-loaders, and had to suffer through weeks of banner ads for construction equipment on unrelated sites. The phenomenon is so bad and so well-known, even The Onion is making fun of it.

Beyond the "creepy" factor of having ad agencies building a huge personal profile for you and following you around the web to use it, user tracking code itself has become so bloated as to ruin the entire web experience.

In fact, on popular sites such as CNN, code to track users accounts for somewhere on the order of three times as much memory usage as the actual page content: a recent demo of the Firefox memory tracking tool found that 30 MB of the 40 MB used to render a news article on CNN was consumed by code whose sole purpose was user tracking.

This drags your browsing experience to a crawl.

Ad Networks Know Who Doesn't Want to be Tracked, But Don't Care.

Under the assumption that advertisers were actually willing to honor user choice, there was a large effort to develop and standardize a way for users to indicate to ad networks that they don't want to be tracked: the "Do Not Track" setting. It's been implemented by all major browsers, and endorsed by the FTC.

For this system to work, though, advertisers need to play ball: they need to honor user requests not to be tracked. As it turns out, advertisers aren't actually interested in honoring users' wishes; as before, they see a tiny sliver of utility in abusing web users with the misguided notion that this somehow translates into profits. Attempts to legislate conformance were made several years ago, but these never really got very far.

So what can you do? The balance of power seems so far out of whack that consumers have little choice than to sit back and take it.

You could, of course, run one of any number of ad blockers -- Adblock Plus is quite popular -- but this is a somewhat nuclear option. You're throwing out the slim selection of good players with the bad ones; and, let's face it, someone's gotta provide money to keep the lights on at your favorite website.

Even worse, many ad blockers employ techniques that consume as much (or more) memory and as much (or more) time as the trackers they're blocking -- and Adblock Plus is one of the worst offenders. They'll stop you from seeing the ads, but at the expense of slowing down everything you do on the web.

What you can do

When people ask me how to fix this, I recommend a set of three tools to make their browsing experience better: Firefox Tracking Protection, Ghostery, and (optionally) Privacy Badger. (While I'm focusing on Firefox here, it's worth noting that both Ghostery and Privacy Badger are also available for Chrome.)

1. Turn on Tracking Protection

Firefox Tracking Protection is automatically activated in recent versions of Firefox whenever you enter "Private Browsing" mode, but you can also manually turn it on to run all the time. If you go to the URL bar and type in "about:config", you'll get into the advanced configuration settings for Firefox (you may have to agree to be careful before it lets you in). Search for a setting called "privacy.trackingprotection.enabled", and then double-click next to it where it says "false" to change it to "true." Once you do that, Tracking Protection will stay on regardless of whether you're in private browsing mode.
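
Equivalently, if you happen to manage your settings through a user.js file in your Firefox profile directory, the same preference is a one-liner (a sketch, assuming you already use user.js):

// Keep Tracking Protection on outside of Private Browsing as well.
user_pref("privacy.trackingprotection.enabled", true);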

Firefox tracking protection uses a curated list of sites that are known to track you and known to ignore the "Do Not Track" setting. Basically, it's a list of known bad actors. And a study of web page load times determined that just turning it on improves page load times by a median of 44%.

2. Install and Configure Ghostery

There's also an add-on that works similar to Tracking Protection, called Ghostery. Install it from addons.mozilla.org, and then go into its configuration (type "about:addons" into your URL bar, and select the "Preferences" button next to Ghostery). Now, scroll down to "blocking options," near the bottom of the page. Under the "Trackers" tab, click on "select all." Then, uncheck the "widgets" category. (Widgets can be used to track you, but they also frequently provide useful functions for a web page: they're a mixed bag, but I find that their utility outweighs their cost).

Ghostery also uses a curated list, but it's far more aggressive in what it considers to be tracking. It also allows you fine-grained control over what you block, and lets you easily whitelist sites, if you find that they're not working quite right with all the potential trackers removed.

Poke around at the other options in there, too. It’s really a power user’s tracker blocker.

3. Optionally, Install Privacy Badger

Unlike tracking protection and Ghostery, Privacy Badger isn't a curated list of known trackers. Instead, it's a tool that watches what webpages do. When it sees behavior that could be used to track users across multiple sites, it blocks that behavior from ever happening again. So, instead of knowing ahead of time what to block, it learns what to block. In other words, it picks up where the other two tools leave off.

This sounds really good on paper, and does work pretty well in practice. I ran with Privacy Badger turned on for about a month, with mostly good results. Unfortunately, its "learning" can be a bit aggressive, and I found that it broke sites far more frequently than Ghostery. So the trade-off here: if you run Privacy Badger, you'll have much better protection against tracking, but you'll also have to be alert to the kinds of defects that it can introduce, and go turn it off when it interferes with what you're trying to do. Personally, I turned it off a few months ago, and haven't bothered to reactivate it yet; but I'll be checking back periodically to see if they've tuned their algorithms (and their yellow-list) to be more user-friendly.

If you're interested in giving it a spin, you can download Privacy Badger from the addons.mozilla.org website.

Andy McKayDocumentation debt

There's lots of talk about technical debt, but documentation debt is just as real and similar. Every line of documentation written needs maintaining and keeping up to date... and the chances are that over time it will slowly become more and more outdated and useless.

This does harm when the documentation actively misleads people, causing them to make wrong decisions and costing them time. You've probably all seen a person come onto a mailing list or group chat wondering why something doesn't work and getting frustrated. Followed by the answer "oh that documentation is out of date".

That frustration is real and can be harmful to your project.

  • Avoid documenting stuff that doesn't need to be documented, especially if it is documented elsewhere. For example: if your project is on GitHub and follows standard practices, you shouldn't really need to document that commit process.

  • Avoid the trap of "it might be useful to someone". It just might; however, taking that to its extreme means you can't distinguish what to document. In code terms this is similar to the "let's make this an Adapter/Factory/Engine/Class of boggling complexity because in the future someone might want to..." problem.

  • Review your documentation and be merciless about cutting things.

  • Apply a review process to your documentation. Wikis are fine for collaboration and spontaneity, but they might not be suitable for your project's documentation. One example is to store your documentation in source control and apply the same kind of review process you use for code.

  • Finally, if a document needs to convey critical information, putting it at the top of the page is pointless if the page runs for more than one screen length. For example, consider this page vs. this page; both are deprecated.

Just as you spend time reviewing technical debt, I recommend reviewing and cleaning documentation debt too.

John O'Duinn“Distributed” ER#3 now available!

Book Cover for DistributedEarlier this week, just before the US Thanksgiving holidays, we shipped Early Release #3 for my “Distributed” book-in-progress.

Early Release #3 (ER#3) adds two new chapters (Ch.1, remoties trends, and Ch.2, the real cost of an office), plus many tweaks/fixes to the previous chapters. There are now a total of 9 chapters available (1, 2, 4, 6, 7, 8, 10, 13, 15), arranged into three sections. (These chapters were the inspiration for recent presentations and blog posts here, here and here.)

ER#3 comes one month after ER#2. You can buy ER#3 by clicking here, or clicking on the thumbnail of the book cover. Anyone who already has ER#1 or ER#2 should get prompted with a free update to ER#3. (If you don’t please let me know!). And yes, you’ll get updated when ER#4 comes out next month.

Please let me know what you think of the book so far. Your feedback helps shape/scope the book! Is there anything I should add/edit/change? Anything you found worked for you, as a “remotie” or person in a distributed team, which you wish you knew when you were starting? If you were going to set up a distributed team today, what would you like to know before you started?

Thank you to everyone who’s already sent me feedback/opinions/corrections – all changes that are making the book better. I’m merging changes/fixes as fast as I can – some days are fixup days, some days are new writing days. All great to see coming together. To make sure that any feedback doesn’t get lost or caught in spam filters, it’s best to email a special email address (feedback at oduinn dot com) although feedback via twitter and linkedin works also. Thanks again to everyone for their encouragement, proof-reading help and feedback so far.

Now, it’s time to get back to typing. ER#4 is coming soon!

John.

Robert O'Callahanrr Replay Performance Improvements

I've been spending a lot of time using rr, as have some other Mozilla developers, and it occurred to me a small investment in speeding up the debugging experience could pay off in improved productivity quite quickly. Until recently no-one had ever really done any work to speed up replay, so there was some low-hanging fruit.

During recording we avoid trapping from tracees to the rr process for common syscalls (read, clock_gettime and the like) with an optimization we call "syscall buffering". The basic idea is that the tracee performs the syscall "untraced", we use a seccomp-bpf predicate to detect that the syscall should not cause a ptrace trap, and when the syscall completes the tracee copies its results to a log buffer. During replay we do not use seccomp-bpf; we were using PTRACE_SYSEMU to generate a ptrace trap for every syscall and then emulating the results of all syscalls from the rr process. The obvious major performance improvement is to avoid generating ptrace traps for buffered syscalls during replay, just as we do during recording.

This was tricky to do while preserving our desired invariants that control flow is identical between recording and replay, and data values (in application memory and registers) are identical at all times. For example consider the recvmsg system call, which takes an in/out msg parameter. During recording syscall wrappers in the tracee would copy msg to the syscall log buffer, perform the system call, then copy the data from the log buffer back to msg. Hitherto, during replay we would trap on the system call and copy the saved buffer contents for that system call to the tracee buffer, whereupon the tracee syscall wrappers would copy the data out to msg. To avoid trapping to rr for a sequence of such syscalls we need to copy the entire syscall log buffer to the tracee before replaying them, but then the syscall wrapper for recvmsg would overwrite the saved output when it copies msg to the buffer! I solved this, and some other related problems, by introducing a few functions that behave differently during recording and replay while preserving control flow and making sure that register values only diverge temporarily and only in a few registers. For this recvmsg case I introduced a function memcpy_input_parameter which behaves like memcpy during recording but is a noop during replay: it reads a global is_replay flag and then does a conditional move to set the source address to the destination address during replay.

Another interesting problem is recapturing control of the tracee after it has run a set of buffered syscalls. We need to trigger some kind of ptrace trap after reaching a certain point in the syscall log buffer, without altering the control flow of the tracee. I handled this by generating a large array of stub functions (each only one byte, a RET instruction) and after processing the log buffer entry ending at offset O, we call stub function number O/8 (each log record is at least 8 bytes long). rr identifies the last log entry after which it wants to stop the tracee, and sets a breakpoint at the appropriate stub function.

It took a few late nights and a couple of half-days of debugging but it works now and I landed it on master. (Though I expect there may be a few latent bugs to shake out.) The results are good:

This shows much improved replay overhead for Mochitest and Reftest, though not much improvement on Octane. Mochitest and Reftest are quite system-call intensive so our optimization gives big wins there. Mochitests spend a significant amount of time in the HTTP server, which is not recorded by rr, and therefore zero-overhead replay could actually run significantly faster than normal execution, so it's not surprising we're already getting close to parity there. Octane replay is dominated by SCHED context-switch events, each one of which we replay using relatively expensive trickery to context-switch at exactly the right moment.

For rr cognoscenti: as part of eliminating traps for replay of buffered syscalls, I also eliminated the traps for the ioctls that arm/disarm the deschedule-notification events. That was relatively easy (just replace those syscalls with noops during replay) and actually simplified code since we don't have to write those events to the trace and can wholly ignore them during replay.

There's definitely more that can be squeezed out of replay, and probably recording as well. E.g. currently we record a SCHED event every time we try to context-switch, even if we end up rescheduling the thread that was already running (which is common). We don't need to do that, and eliminating those events would reduce syscallbuf flushing and also the number of ptrace traps taken during replay. This should hugely benefit Octane. I'm trying to focus on easy rr improvements with big wins that are likely to pay off for Mozilla developers in the short term; it's difficult to know whether any given improvement is in that category, but I think SCHED elision during recording probably is. (We used to elide recorded SCHED events during replay, but that added significant complexity to reverse execution so I took it out.)

Chris AtLeeFirefox builds on the Taskcluster Index

RIP FTP?

You may have heard rumblings that FTP is going away...

Over the past few quarters we've been working to migrate our infrastructure off of the ageing "FTP" [1] system to Amazon S3.

We've maintained some backwards compatibility for the time being [2], so that current Firefox CI and release builds are still available via ftp.mozilla.org, or preferably, archive.mozilla.org since we don't support the ftp protocol any more!

Our long-term plan is to make the builds available via the Taskcluster Index, and stop uploading builds to archive.mozilla.org.

How do I find my builds???

This is pretty big change, but we really think this will make it easier to find the builds you're looking for.

The Taskcluster Index allows us to attach multiple "routes" to a build job. Think of a route as a kind of hierarchical tag, or directory. Unlike regular directories, a build can be tagged with multiple routes, for example, according to the revision or buildid used.

A great tool for exploring the Taskcluster Index is the Indexed Artifact Browser

Here are some recent examples of nightly Firefox builds:

The latest win64 nightly Firefox build is available via the
gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt route

This same build (as of this writing) is also available via its revision:

gecko.v2.mozilla-central.nightly.revision.47b49b0d32360fab04b11ff9120970979c426911.firefox.win64-opt

Or the date:

gecko.v2.mozilla-central.nightly.2015.11.27.latest.firefox.win64-opt

The artifact browser is simply an interface on top of the index API. Using this API, you can also fetch files directly using wget, curl, python requests, etc.:

https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt/artifacts/public/build/firefox-45.0a1.en-US.win64.installer.exe [3]
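
As a small sketch of doing the same from script (assuming a JavaScript runtime with fetch() available; the route and filename are the ones shown above):

const route = "gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt";
const artifact = "public/build/firefox-45.0a1.en-US.win64.installer.exe";
const url = `https://index.taskcluster.net/v1/task/${route}/artifacts/${artifact}`;

fetch(url)
  .then((res) => {
    if (!res.ok) throw new Error(`Download failed: HTTP ${res.status}`);
    return res.arrayBuffer();
  })
  .then((bytes) => console.log(`Fetched ${bytes.byteLength} bytes from ${url}`))
  .catch((err) => console.error(err));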

Similar routes exist for other platforms, for B2G and mobile, and for opt/debug variations. I encourage you to explore the gecko.v2 namespace, and see if it makes things easier for you to find what you're looking for! [4]

Can't find what you want in the index? Please let us know!

[1]A historical name referring back to the time when we used the FTP protocol to serve these files. Today, the files are available only via HTTP(S).
[2]in fact, all Firefox builds are currently uploaded to S3; we've just had to implement some compatibility layers to make S3 appear in many ways like the old FTP service.
[3]yes, you need to know the version number...for now. we're considering stripping that from the filenames. if you have thoughts on this, please get in touch!
[4]ignore the warning on the right about "Task not found" - that just means there are no tasks with that exact route; kind of like an empty directory

Jan de MooijMath.random() and 32-bit precision

Last week, Mike Malone, CTO of Betable, wrote a very insightful and informative article on Math.random() and PRNGs in general. Mike pointed out V8/Chrome used a pretty bad algorithm to generate random numbers and, since this week, V8 uses a better algorithm.

The article also mentioned the RNG we use in Firefox (it was copied from Java a long time ago) should be improved as well. I fully agree with this. In fact, over the past few days I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+; see bug 322529. We think XorShift128+ is a good choice: we already had a copy of the RNG in our repository, it's fast (even faster than our current algorithm!), and it passes BigCrush (the most complete RNG test available).
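
For the curious, here is a rough JavaScript sketch of XorShift128+ itself (engines implement this in C++ and differ in how they seed and convert to a double; the seed values below are arbitrary non-zero assumptions, and BigInt stands in for 64-bit integers):

const MASK64 = (1n << 64n) - 1n;
let s0 = 0x8a5cd789635d2dffn; // assumed seed; the state must never be all zero
let s1 = 0x121fd2155c472f96n;

function next64() {
  let x = s0;
  const y = s1;
  s0 = y;
  x ^= (x << 23n) & MASK64; // shift constants from the published algorithm
  x ^= x >> 17n;
  x ^= y ^ (y >> 26n);
  s1 = x;
  return (x + y) & MASK64; // 64-bit result
}

// Map the top 53 bits onto a double in [0, 1), Math.random()-style.
function randomDouble() {
  return Number(next64() >> 11n) / Math.pow(2, 53);
}

console.log(randomDouble(), randomDouble(), randomDouble());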

While working on this, I looked at a number of different RNGs and noticed Safari/WebKit uses GameRand. It's extremely fast but very weak. (Update Dec 1: WebKit is now also using XorShift128+, so this doesn't apply to newer Safari/WebKit versions.)

Most interesting to me, though, was that, like the previous V8 RNG, it has only 32 bits of precision: it generates a 32-bit unsigned integer and then divides that by UINT_MAX + 1. This means the result of the RNG is always one of about 4.2 billion different numbers, instead of 9007199 billion (2^53). In other words, it can generate 0.00005% of all numbers an ideal RNG can generate.

I wrote a small testcase to visualize this. It generates random numbers and plots all numbers smaller than 0.00000131072.
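
The testcase generates and plots the values; a rough sketch of the same idea (counting distinct values instead of plotting them, and with a much smaller, arbitrary sample count) looks like this:

// With only 32 bits of precision, every result is k / 2^32 for some integer k,
// so the distinct values below a tiny threshold form a sparse, evenly spaced set.
const threshold = 0.00000131072;
const seen = new Set();

for (let i = 0; i < 1e8; i++) { // the post used on the order of 115 billion draws
  const x = Math.random();
  if (x < threshold) {
    seen.add(x);
  }
}

console.log(`distinct values below ${threshold}: ${seen.size}`);
for (const x of seen) {
  // Flag values that sit exactly on the 32-bit grid.
  console.log(x, (x * Math.pow(2, 32)) % 1 === 0 ? "(multiple of 1/2^32)" : "");
}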

Here's the output I got in Firefox (old algorithm) after generating 115 billion numbers:

And a Firefox build with XorShift128+:

In Chrome (before Math.random was fixed):

And in Safari:

These pics clearly show the difference in precision.

Conclusion

Safari and older Chrome versions both generate random numbers with only 32 bits of precision. This issue has been fixed in Chrome, but Safari's RNG should probably be fixed as well. Even if we ignore its suboptimal precision, the algorithm is still extremely weak.

Math.random() is not a cryptographically-secure PRNG and should never be used for anything security-related, but, as Mike argued, there are a lot of much better (and still very fast) RNGs to choose from.

Support.Mozilla.OrgWhat’s up with SUMO – 27th November

Hello, SUMO Nation!

Have you had a good week so far? We hope you have! Here are a few pertinent updates from the world of SUMO for your reading pleasure.

Welcome, new contributors!

…at least that’s the only one we know of! So, if you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Scribe & Phoxuponyou – for their constant contributions on the support forum – cheers!
  • Costenslayer – for offering to help us with cloning our YT videos to AirMo – thanks!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 30th of November. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).

Developers

Community

Support Forum

Firefox

  • for Desktop
    • GTK+3 is required for Firefox on Linux as of Beta 43 (the full release comes on the 15th of December).
  •  for iOS
    • Version 1.2 is out – please go ahead and test it on your devices!
    • Version 2.0 is going to happen in 2016 (we just don’t know when exactly… yet), and should add synchronization from iOS to Desktop and/or Android.
  • Firefox OS
    • Peace and quiet makes for the start of a good weekend!

Thank you for reading all the way to the end! We hope you join us on Monday (and beyond that day), and wish you a great, relaxing weekend. Take it easy and stay foxy!

Mozilla FundraisingA/B Test: Three-page vs One-page donation flow

Here are the results of our first A/B test from our 2015 End of Year fundraising campaign. Three page flow (our control): In our control (above), credit card donations are processed (via Stripe) from within our user interface, in a … Continue reading

Dustin J. MitchellRemote GPG Agent

Private keys should be held close -- the fewer copies of them, and the fewer people have access to them, the better. SSH agents, with agent forwarding, do a pretty good job of this. For quite a long time, I've had my SSH private key stored only on my laptop and desktop, with a short script to forward that agent into my remote screen sessions. This works great: while I'm connected and my key is loaded, I can connect to hosts and push to repositories with no further interaction. But once I disconnect, the screen sessions can no longer access the key.

Doing the same for GPG keys turns out to be a bit harder, not helped by the lack of documentation from GnuPG itself. In fact, as far as I can tell, it was impossible before GnuPG 2.1, and a great deal more difficult before OpenSSH 6.7.

I don't want exactly the same thing, anyway: I only need access to my GPG private keys once every few days (to sign a commit, for example). So I'd like to control exactly when I make the agent available.

The solution I have found involves this shell script, named remote-gpg:

#! /bin/bash

set -e

host=$1
if [ -z "$host" ]; then
    echo "Supply a hostname"
    exit 1
fi

# remove any existing agent socket (in theory `StreamLocalBindUnlink yes` does this,
# but in practice, not so much)
ssh $host rm -f ~/.gnupg/S.gpg-agent
ssh -t -R ~/.gnupg/S.gpg-agent:.gnupg/S.gpg-agent-extra $host \
    sh -c 'echo; echo "Perform remote GPG operations and hit enter"; \
        read; \
        rm -f ~/.gnupg/S.gpg-agent'; 

The critical bit of configuration was to add the following to .gnupg/gpg-agent.conf on my laptop and desktop:

extra-socket /home/dustin/.gnupg/S.gpg-agent-extra

and then kill the agent to reload the config:

gpg-connect-agent reloadagent /bye

The idea is this: the local GPG agent (on the laptop or desktop) publishes this "extra" socket specifically for forwarding to remote machines. The set of commands accepted over the socket is limited, although it does include access to the key material. The SSH command then forwards the socket (this functionality was added in OpenSSH 6.7) to the remote host, after first deleting any existing socket. That command displays a prompt, waits for the user to signal completion of the operation, then cleans up.

To use this, I just open a new terminal or local screen window and run remote-gpg euclid. If my key is not already loaded, I'm prompted to enter the passphrase. GPG even annotates the prompt to indicate that it's from a remote connection. Once I've finished with the private keys, I go back to the window and hit enter.

Air MozillaParticipation Call, 26 Nov 2015

Participation Call The Participation Call helps connect Mozillians who are thinking about how we can achieve crazy and ambitious goals by bringing new people into the project...

Air MozillaReps weekly, 26 Nov 2015

Reps weekly This is a weekly call with some of the Reps council members to discuss all matters Reps, share best practices and invite Reps to share...

Armen ZambranoMozhginfo/Pushlog client released

Hi,
If you've ever spent time trying to query metadata about revisions from hg, you can now use a Python library we've released to do so.

In bug 1203621 [1], our community contributor @MikeLing [2] has helped us release the pushlog.py module we had written for Mozilla CI tools.

You can find the pushlog_client package here [3] and the code here [4].

Thanks MikeLing!

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1203621
[2] https://github.com/MikeLing
[3] https://pypi.python.org/pypi/pushlog_client
[4] https://hg.mozilla.org/hgcustom/version-control-tools/rev/6021c9031bc3


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Andy McKayAdd-ons at Mozlando

If you are going to Orlando for the Mozilla summit and want to talk add-ons, we want to talk to you. But if you look at the schedule, we haven't scheduled a whole pile of meetings ourselves, yet there are 430 meetings (by my quick count) scheduled overall.

In fact we've got one meeting I'd like people who are interested in learning about add-ons to come to: the add-ons open house and demos. We'll be talking road map, 2016 planning, a few demos and then getting a chat going. You should come.

If you want to talk with us on any add-ons subject at any other time, we'd love to talk, and I'm sure we can work around each other's schedules. You can find me online or wander into the Firefox home room and look for the add-ons sign (yes, I plan on making one). We can set up quick ad-hoc meetings on pretty much anything.

There's a reason for that: the productivity and happiness I've encountered at work weeks is inversely related to the number of meetings I have. At Portland I was triple booked at one point. At Whistler I had few meetings, and most of the ones I did have were relaxed, outdoors and in the sun.

Whistler ended up being a much more positive experience for me and my team.

So here's the plan for my team:

  • meet and interact with the rest of our team
  • learn what people outside our group are doing
  • learn where Mozilla is going
  • don't feel under pressure to attend any meetings

We'll be hacking on code when we are hanging out, the criteria being:

  • nothing that is on a critical path
  • the hack must involve working with other people
  • don't feel under pressure to complete that code

That's about it, just let the team flow, don't hold it back with meetings.

Because let's face it, if you want to have a meeting on a subject, we can do that any time with video conferencing. I look forward to seeing you there.

Nick CameronMacro hygiene in all its guises and variations

Note, I'm not sure of the terminology for some of this stuff, so I might be making horrible mistakes, apologies.

Usually, when we talk about macro hygiene we mean the ability to not confuse identifiers with the same name but from different contexts. This is a big and interesting topic in its own right and I'll discuss it in some depth later. Today I want to talk about other kinds of macro hygiene.

There is hygiene when naming items (I've heard this called "path hygiene", but I'm not sure if that is a standard term). For example,

mod a {  
    fn f() {}

    pub macro foo() {
        f();
    }
}

a::foo!();  

The macro use will expand to f(), but there is no f in scope. Currently this will be a name resolution error. Ideally, we would remember the scope where the call to f came from and look up f in that scope.

I believe that switching our hygiene algorithm to scope sets and using the scope sets for name resolution solves this issue.

Privacy hygiene

In the above example, f is private to a, so even if we can name it from the expansion of foo, we still can't access it due to its visibility. Again, scope sets comes to the rescue. The intuition is that we check privacy from the scope used to find f, not from its lexical position. There are a few more details than that, but nothing that will make sense before explaining the scope sets algorithm in detail.

Unsafety hygiene

The goal here is that when checking for unsafety, whether or not we are allowed to execute unsafe code depends on the context where the code is written, not where it is expanded. For example,

unsafe fn foo(x: i32) {}

macro m1($x: expr) {  
    foo($x)
}

macro m2($x: expr) {  
    $x
}

macro m3($x: expr) {  
    unsafe {
        foo($x)
    }
}

macro m4($x: expr) {  
    unsafe {
        $x
    }
}

fn main() {  
    foo(42); // bad
    unsafe {
        foo(42);  // ok
    }
    m1(42); // bad
    m2(foo(42)); // bad
    m3(42); // ok
    m4(foo(42)); // bad
    unsafe {
        m1(42); // bad
        m2(foo(42)); // ok
        m3(42); // ok
        m4(foo(42)); // ok
    }
}

We could in theory use the same hygiene information as for the previous kinds. But when checking unsafety we are checking expressions, not identifiers, and we only record hygiene info for identifiers.

One solution would be to track hygiene for all tokens, not just identifiers. That might not be too much effort since groups of tokens passed together would have the same hygiene info. We would only be duplicating indices into a table, not more data than that. We would also have to track or be able to calculate the safety-status of scopes.

Alternatively, we could introduce a new kind of block into the token tree system - a block which can't be written by the user, only created by expansion or procedural macros. It would affect precedence but not scoping. Such a block is also the solution to having interpolated AST in the token stream - we just have tokens wrapped in the scope-less block. Such a block could be annotated with its safety-status. We would need to track unsafety during parsing/expansion to make this work. We have something similar to this in the HIR where we can push/pop unsafe blocks. I believe we want an absolute setting here rather than push/pop though, and we also don't want to introduce new scoping.

We could follow the current stability solution and annotate spans, but this is a bit of an abuse of spans, IMO.

I'm not super-happy with any of these solutions.

Stability hygiene

Finally, stability. We would like for macros in libraries with access to unstable code to be able to access unstable code when expanded. This is currently supported in Rust by having a bool on spans. We can probably continue to use this system or adapt either of the solutions proposed for unsafety hygiene.

It would be nice for macros to be marked as stable or unstable, but I believe this is orthogonal to hygiene.

Mozilla Addons BlogAdd-ons Update – Week of 2015/11/25

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 758 add-ons were reviewed:

  • 602 (79%) were reviewed in less than 5 days.
  • 32 (4%) were reviewed between 5 and 10 days.
  • 124 (16%) were reviewed after more than 10 days.

There are 281 listed add-ons awaiting review, and 189 unlisted add-ons awaiting review. I should note that this is an unusually large number of unlisted add-ons, which is due to a mass uploading by a developer with 100+ add-ons.

Review times for most add-ons have improved recently due to more volunteer activity. Add-ons that are admin-flagged or very complex are now getting much-needed attention, thanks to a new contractor reviewer. There’s still a fairly large review backlog to go through.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Firefox 43 Compatibility

This compatibility blog post is now public. The bulk compatibility validation should be run soon.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Changes in let and const in Firefox 44

Firefox 44 includes some breaking changes that you should all be aware of. Please read the post carefully and test your add-ons on Nightly or the newest Developer Edition.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to turn on enforcement by default in Firefox 43.

Electrolysis

Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will now use multiple processes, running content code in a different process than browser code.

This is the time to test your add-ons and make sure they continue working in Firefox. We’re holding regular office hours to help you work on your add-ons, so please drop in on Tuesdays and chat with us!

Web Extensions

If you read the post on the future of add-on development, you should know there are big changes coming. We’re investing heavily in the new WebExtensions API, so we strongly recommend that you start looking into it for your add-ons. You can track the progress of its development at http://www.arewewebextensionsyet.com/.

Air MozillaQuality Team (QA) Public Meeting, 25 Nov 2015

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Air MozillaBugzilla Development Meeting, 25 Nov 2015

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

Chris H-CHow Mozilla Pays Me

When I told people I was leaving BlackBerry and going to work for Mozilla, the first question was often “Who?”

(“The Firefox people, ${familyMember}” “Oh, well why didn’t you say so”)

More often the first question (and almost always the second question for ${familyMember}) was “How do they make their money?”

When I was working for BlackBerry, it seemed fairly obvious: BlackBerry made its money selling BlackBerry devices. (Though obvious, this was actually incorrect, as the firm made its money more through services and servers than devices. But that’s another story.)

With Mozilla, there’s no clear thing that people’s minds can latch onto. There’s no doodad being sold for dollarbucks, there’s no subscriber fee, there’s no “professional edition” upsell…

Well, today the Mozilla Foundation released its State of Mozilla report including financials for calendar 2014. This ought to clear things up, right? Well…

The most relevant part of this would be page 6 of the audited financial statement which shows that, roughly speaking, Mozilla makes its money thusly (top three listed):

  • $323M – Royalties
  • $4.2M – Contributions (from fundraising efforts)
  • $1M – Interest and Dividends (from investments)

Where this gets cloudy is that “Royalties” line. The Mozilla Foundation is only allowed to accrue certain kinds of income since it is a non-profit.

Which is why I’m not employed by the Foundation but by Mozilla Corporation, the wholly-owned subsidiary of the Mozilla Foundation. MoCo is a taxable entity responsible for software development and stuff. As such, it can earn and spend like any other privately-held concern. It sends dollars back up the chain via that “Royalties” line because it needs to pay to license wordmarks, trademarks, and other intellectual property from the Foundation. It isn’t the only contributor to that line, I think, as I expect sales of plushie Firefoxen and tickets to MozFest factor in somehow.

So, in conclusion, rest assured, ${concernedPerson}: Mozilla Foundation has plenty of money coming in to pay my…

Well, yes, I did just say I was employed by Mozilla Corporation. So?

What do you mean where does the Corporation get its money?

Fine, fine, I was just going to gloss over this part and sway you with those big numbers and how MoCo and MoFo sound pretty similar… but I guess you’re too cunning for that.

Mozilla Corporation is not a publicly-traded corporation, so there are no public documents I can point you to for answers to that question. However, there was a semi-public statement back in 2006 that confirmed that the Corporation was earning within an order of magnitude of $76M in search-related partnership revenue.

It’s been nine years since then. The Internet has changed a lot since the year Google bought YouTube and MySpace was the primary social network of note. And our way of experiencing it has changed from sitting at a desk to having it in our pockets. Firefox has been downloaded over 100 million times on Android and topped some of the iTunes App Store charts after being released twelve days ago for iOS. If this sort of partnership is still active, and is somewhat proportional to Firefox’s reach, then it might just be a different number than “within an order of magnitude of $76M.”

So, ${concernedPerson}, I’m afraid there just isn’t any more information I can give you. Mozilla does its business, and seems to be doing it well. As such, it collects revenue which it has to filter through various taxes and regulation authorities at various levels which are completely opaque even when they’re transparent. From that, I collect a paycheque.

At the very least, take heart from the Contributions line. That money comes from people who like that Mozilla does good things for the Internet. So as long as we’re doing good things (and we have no plans to stop), there is a deep and growing level of support that should keep me from asking for money.

Though, now that you mention it

:chutten


Air MozillaThe Joy of Coding - Episode 36

The Joy of Coding - Episode 36 mconley livehacks on real Firefox bugs while thinking aloud.

Jan de MooijMaking `this` a real binding in SpiderMonkey

Last week I landed bug 1132183, a pretty large patch rewriting the implementation of this in SpiderMonkey.

How this Works In JS

In JS, when a function is called, an implicit this argument is passed to it. In strict mode, this inside the function just returns that value:

function f() { "use strict"; return this; }
f.call(123); // 123

In non-strict functions, this always returns an object. If the this-argument is a primitive value, it's boxed (converted to an object):

function f() { return this; }
f.call(123); // returns an object: new Number(123)

Arrow functions don't have their own this. They inherit the this value from their enclosing scope:

function f() {
    "use strict";
    () => this; // `this` is 123
}
f.call(123);

And, of course, this can be used inside eval:

function f() {
    "use strict";
    eval("this"); // 123
}
f.call(123);

Finally, this can also be used in top-level code. In that case it's usually the global object (lots of hand waving here).

How this Was Implemented

Until last week, here's how this worked in SpiderMonkey:

  • Every stack frame had a this-argument,
  • Each this expression in JS code resulted in a single bytecode op (JSOP_THIS),
  • This bytecode op boxed the frame's this-argument if needed and then returned the result.

Special case: to support the lexical this behavior of arrow functions, we emitted JSOP_THIS when we defined (cloned) the arrow function and then copied the result to a slot on the function. Inside the arrow function, JSOP_THIS would then load the value from that slot.

There was some more complexity around eval: eval-frames also had their own this-slot, so whenever we did a direct eval we'd ensure the outer frame had a boxed (if needed) this-value and then we'd copy it to the eval frame.

The Problem

The most serious problem was that it's fundamentally incompatible with ES6 derived class constructors, as they initialize their 'this' value dynamically when they call super(). Nested arrow functions (and eval) then have to 'see' the initialized this value, but that was impossible to support because arrow functions and eval frames used their own (copied) this value, instead of the updated one.

Here's a worst-case example:

class Derived extends Base {
    constructor() {
        var arrow = () => this;

        // Runtime error: `this` is not initialized inside `arrow`.
        arrow();

        // Call Base constructor, initialize our `this` value.
        eval("super()");

        // The arrow function now returns the initialized `this`.
        arrow();
    }
}

We currently (temporarily!) throw an exception when arrow functions or eval are used in derived class constructors in Firefox Nightly.

Boxing this lazily also added extra complexity and overhead. I already mentioned how we had to compute this whenever we used eval.

The Solution

To fix these issues, I made this a real binding:

  • Non-arrow functions that use this or eval define a special .this variable,
  • In the function prologue, we get the this-argument, box it if needed (with a new op, JSOP_FUNCTIONTHIS) and store it in .this,
  • Then we simply use that variable each time this is used.

Arrow functions and eval frames no longer have their own this-slot, they just reference the .this variable of the outer function. For instance, consider the function below:

function f() {
    return () => this.foo();
}

We generate bytecode similar to the following pseudo-JS:

function f() {
    var .this = BoxThisIfNeeded(this);
    return () => (.this).foo();
}

I decided to call this variable .this, because it nicely matches the other magic 'dot-variable' we already had, .generator. Note that these are not valid variable names so JS code can't access them. I only had to make sure with-statements don't intercept the .this lookup when this is used inside a with-statement...

Doing it this way has a number of benefits: we only have to check for primitive this values at the start of the function, instead of each time this is accessed (although in most cases our optimizing JIT could/can eliminate these checks, when it knows the this-argument must be an object). Furthermore, we no longer have to do anything special for arrow functions or eval; they simply access a 'variable' in the enclosing scope and the engine already knows how to do that.

In the global scope (and in eval or arrow functions in the global scope), we don't use a binding for this (I tried this initially but it turned out to be pretty complicated). There we emit JSOP_GLOBALTHIS for each this-expression, then that op gets the this value from a reserved slot on the lexical scope. This global this value never changes, so the JITs can get it from the global lexical scope at compile time and bake it in as a constant :) (Well.. in most cases. The embedding can run scripts with a non-syntactic scope chain, in that case we have to do a scope walk to find the nearest lexical scope. This should be uncommon and can be optimized/cached if needed.)

The Debugger

The main nuisance was fixing the debugger: because we only give (non-arrow) functions that use this or eval their own this-binding, what do we do when the debugger wants to know the this-value of a frame without a this-binding?

Fortunately, the debugger (DebugScopeProxy, actually) already knew how to solve a similar problem that came up with arguments (functions that don't use arguments don't get an arguments-object, but the debugger can request one anyway), so I was able to cargo-cult and do something similar for this.

Other Changes

Some other changes I made in this area:

  • In bug 1125423 I got rid of the innerObject/outerObject/thisValue Class hooks (also known as the holy grail). Some scope objects had a (potentially effectful) thisValue hook to override their this behavior, which made it hard to see what was going on. Getting rid of that made it much easier to understand and rewrite the code.
  • I posted patches in bug 1227263 to remove the this slot from generator objects, eval frames and global frames.
  • IonMonkey was unable to compile top-level scripts that used this. As I mentioned above, compiling the new JSOP_GLOBALTHIS op is pretty simple in most cases; I wrote a small patch to fix this (bug 922406).

Conclusion

We changed the implementation of this in Firefox 45. The difference is (hopefully!) not observable, so these changes should not break anything or affect code directly. They do, however, pave the way for more performance work and fully compliant ES6 Classes! :)

Mozilla Addons BlogA New Firefox Add-ons Validator

The state of add-ons has changed a lot over the past five years, with Jetpack add-ons rising in popularity and Web Extensions on the horizon. Our validation process hasn’t changed as much as the ecosystem it validates, so today Mozilla is announcing we’re building a new Add-ons Validator, written in JS and available for testing today! We started this project only a few months ago and it’s still not production-ready, but we’d love your feedback on it.

Why the Add-ons Validator is Important

Add-ons are a huge part of why people use Firefox. There are currently over 22,000 available, and with work underway to allow Web Extensions in Firefox, it will become easier than ever to develop and update them.

All add-ons listed on addons.mozilla.org (AMO) are required to pass a review by Mozilla’s add-on review team, and the first step in this process is automated validation using the Add-ons Validator.

The validator alerts reviewers to deprecated API usage, errors, and bad practices. Since add-ons can contain a lot of code, these alerts help developers pinpoint the bits of code that might make the browser buggy or slow, among other problems. It also helps detect insecure add-on code, which keeps your browsing fast and safe.

Our current validator is a bit old, and because it’s written in Python with JavaScript dependencies, it’s difficult for add-on developers to install themselves. This means add-on developers often don’t know about validation errors until they submit their add-on for review.

This wastes time, introducing a feedback cycle that could have been avoided if the add-on developer could have just run addons-validator myAddon.xpi before they uploaded their add-on. If developers could easily check their add-ons for errors locally, getting their add-ons in front of millions of users is that much faster.

And now they can!

The new Add-ons Validator, in JS

I’m not a fan of massive rewrites, but in this case it really helps. Add-on developers are JavaScript coders and nearly everyone involved in web development these days uses Node.js. That’s why we’ve written the new validator in JavaScript and published it on npm, which you can install right now.

We also took this opportunity to review all the rules the old add-on validator defined, and removed a lot of outdated ones. Some of these hadn’t been seen on AMO for years. This allowed us to cut down on code footprint and make a faster, leaner, and easier-to-work-with validator for the future.

Speaking of which…

What’s next?

The new validator is not production-quality code yet, and there are rules we haven’t implemented, but we’re looking to finish it by the first half of next year.

We’re still porting over relevant rules from the old validator. Our three objectives are:

  1. Porting old rules (discarding outdated ones where necessary)
  2. Adding support for Web Extensions
  3. Getting the new validator running in production

We’re looking for help with those first two objectives, so if you’d like to help us make our slightly ambitious full-project-rewrite-deadline, you can…

Get Involved!

If you’re an add-on developer, JavaScript programmer, or both: we’d love your help! Our code and issue tracker are on GitHub at github.com/mozilla/addons-validator. We keep a healthy backlog of issues available, so you can help us add rules, review code, or test things out there. We also have a good first bug label if you’re new to add-ons but want to contribute!

If you’d like to try the next-generation add-ons validator, you can install it with npm: npm install addons-validator. Run your add-ons against it and let us know what you think. We’d love your feedback as GitHub issues, or emails on the add-on developer mailing list.

And if you’re an add-on developer who wishes the validator did something it currently doesn’t, please let us know!

We’re really excited about the future of add-ons at Mozilla; we hope this new validator will help people write better add-ons. It should make writing add-ons faster, help reviewers get through add-on approvals faster, and ultimately result in more awesome add-ons available for all Firefox users.

Happy hacking!

Matjaž HorvatMeet Jarek, splendid Pontoon contributor

Some three months ago, a new guy named jotes showed up in the #pontoon IRC channel. It quickly became obvious he was running a local instance of Pontoon and was ready to start contributing code. Fast forward to the present, and he is one of the core Pontoon contributors. In this short period of time, he implemented several important features, all in his free time:

Top contributors. He started by optimizing the Top contributors page. More specifically, he reduced the number of DB queries by some 99%. Next, he added filtering by time period and later on also by locale and project.

User permissions. Pontoon used to rely on the Mozillians API for giving permissions to localizers. It turned out we need a more detailed approach with team managers manually granting permission to their localizers. Guess who took care of it!

Translation memory. Currently, Jarek is working on translation memory optimizations. Given his track record, our expectations are pretty high. :-)

I have this strange ability to close my eyes when somebody tries to take a photo of me, so on most of them I look like a statue of melancholy. :D

What brought you to Mozilla?
A friend recommended me a documentary called Code Rush. Maybe it will sound stupid, but I was fascinated by the idea of a garage full of fellow hackers with power to change the world. During one of the sleepless nights I visited whatcanidoformozilla.org and after a few patches I knew Mozilla is my place. A place where I can learn something new with help of many amazing people.

Jarek Śmiejczak, thank you for being splendid! And as you said, huge thanks to Linda – love of your life – for her patience and for being an active supporter of the things you do.

To learn more about Jarek, follow his blog at Joyful hackin’.
To start hackin’ on Pontoon, get involved now.

Emily DunhamGiving Thanks to Rust Contributors

Giving Thanks to Rust Contributors

It’s the day before Thanksgiving here in the US, and the time of year when we’re culturally conditioned to be a bit more public than usual in giving thanks for things.

As always, I’m grateful that I’m working in tech right now, because almost any job in the tech industry is enough to fulfill all of one’s tangible needs like food and shelter and new toys. However, plenty of my peers have all those material needs met and yet still feel unsatisfied with the impact of their work. I’m grateful to be involved with the Rust project because I know that my work makes a difference to a project that I care about.

Rust is satisfying to be involved with because it makes a difference, but that would not be true without its community. To say thank you, I’ve put together a little visualization for insight into one facet of how that community works its magic:

[Image: teaser of the orglog contributor stats page]

The stats page is interactive and available at http://edunham.github.io/rust-org-stats/. The pretty graphs take a moment to render, since they’re built in your browser.

There’s a whole lot of data on that page, and you can scroll down for a list of all authors. It’s especially great to see the high impact that the month’s new contributors have had, as shown in the group comparison at the bottom of the “natural log of commits” chart!

It’s made with the little toy I wrote a while ago called orglog, which builds on gitstat to help visualize how many people contribute code to a GitHub organization. It’s deployed to GitHub Pages with TravisCI (eww) and nightli.es so that the Rust’s organization-wide contributor stats will be automatically rebuilt and updated every day.

If you’d like to help improve the page, you can contribute to gitstat or orglog!

Tarek ZiadéShould I use PYTHONOPTIMIZE ?

Yesterday, I was reviewing some code for our projects and in a PR I saw something roughly similar to this:

try:
    assert hasattr(SomeObject, 'some_attribute')
    SomeObject.some_attribute()
except AssertionError:
    SomeObject.do_something_else()

Relying on assert there didn't strike me as a good idea, because when Python is launched using the PYTHONOPTIMIZE flag, which you can activate with the eponymous environment variable or with -O or -OO, all assertions are stripped from the code.
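
For comparison, here is one way to express the same intent without assert, so the behavior stays the same with or without PYTHONOPTIMIZE. This is only a sketch; the SomeObject class below is a stand-in, since the real one isn't part of the snippet above.

class SomeObject:
    # Stand-in for the real object from the reviewed code.
    @staticmethod
    def some_attribute():
        print("some_attribute exists, calling it")

    @staticmethod
    def do_something_else():
        print("falling back")

# An explicit check instead of an assert that -O/-OO would strip away.
if hasattr(SomeObject, 'some_attribute'):
    SomeObject.some_attribute()
else:
    SomeObject.do_something_else()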

To my surprise, a lot of people are dismissing -O and -OO, saying that no one uses those flags in production and that code containing asserts is fine.

PYTHONOPTIMIZE has three possible values: 0, 1 (-O) or 2 (-OO). 0 is the default; nothing happens.

For 1 this is what happens:

  • asserts are stripped
  • the generated bytecode files use the .pyo extension instead of .pyc
  • sys.flags.optimize is set to 1
  • __debug__ is set to False

And for 2 (a short demo script after these lists shows how to observe these effects):

  • everything 1 does
  • docstrings are stripped.
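
Here is a tiny, standard-library-only script (just an illustration) that makes these effects visible; run it as-is, then with -O, then with -OO:

import sys

def sample():
    """A docstring that -OO strips away."""

print("optimize level:", sys.flags.optimize)  # 0 by default, 1 with -O, 2 with -OO
print("__debug__ is", __debug__)              # False with -O and -OO
print("sample.__doc__ is", sample.__doc__)    # None with -OO
assert False, "assert statements disappear with -O and -OO"
print("still running: the assert above was stripped")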

To my knowledge, one legacy reason to run -O was to produce a more efficient bytecode, but I was told that this is not true anymore.

Another behavior that has changed is related to pdb: you could not do step-by-step debugging when PYTHONOPTIMIZE was activated.

Last, the pyo vs pyc thing should go away one day, according to PEP 488.

So where does that leave us? Is there any good reason to use those flags?

Some applications leverage the __debug__ flag to offer two running modes: one with more debug information, or a different behavior when an error is encountered.

That's the case for pyglet, according to their doc.

Some companies are also using the -OO mode to slightly reduce the memory footprint of running apps. It seems to be the case at YouTube.

And it's entirely possible that Python itself will, in the future, add some new optimizations behind that flag.

So yeah, even if you don't use those options yourself, it's good practice to make sure that your Python code is tested with all possible values of PYTHONOPTIMIZE.

It's easy enough: just run your tests with -O, with -OO, and without, and make sure your code does not depend on docstrings or assertions.

If you have to depend on one of them, make sure your code gracefully handles the optimize modes or raises an early error explaining why you are not compatible with them.
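
For the "raise an early error" option, a minimal import-time guard could look something like this (the messages are only examples):

import sys

# Refuse to run in a mode the code cannot handle, instead of
# failing in subtle ways later on.
if sys.flags.optimize >= 2:
    raise RuntimeError("this code relies on docstrings; do not run it with -OO")
if sys.flags.optimize >= 1:
    raise RuntimeError("this code relies on assert statements; "
                       "do not run it with -O or PYTHONOPTIMIZE")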

Thanks to Brett Cannon, Michael Foord and others for their feedback on Twitter on this.

James LongA Simple Way to Route with Redux

This post took months to write. I wasn't working on it consistently, but every time I made progress something would happen that made me scratch everything. It started off as an explanation of how I integrated react-router 0.13 into my app. Now I'm going to talk about how redux-simple-router came to be and explain the philosophy behind it.

Redux embraces a single atom app state to represent all the state for your UI. This has many benefits, the biggest of which is that pieces of state are always consistent with each other. If we update the tree immutably, it's very easy to make atomic updates to the state and keep everything consistent (as opposed to mutating individual pieces of state over time).

Conceptually, the UI is derived from this app state. Everything needed to render the UI is contained in this state, and this is powerful because you can inspect/snapshot/replay the entire UI just by targeting the app state.

But it gets awkward when you want to work with other libraries like react-router that want to take part in state management. react-router is a powerful library for component-based routing; it inherently manages the routing state to provide the user with powerful APIs that handle everything gracefully.

So what do we do? We could use react-router and redux side-by-side, but then the app state object does not contain everything needed for the UI. Snapshotting, replaying, and all that is broken.

One option is to try to take control over all the router state and proxy everything back to react-router. This is what redux-router attempts to do, but it's very complicated and prone to bugs. react-router may put unserializable state in the tree, thus still breaking snapshotting and other useful features.

After integrating redux and react-router in my site, I extracted my solution to a new project: redux-simple-router. The goal is simple: let react-router do all the work. They have already developed very elegant APIs for implementing routing components, and you should just use them.

If you use the regular react-router APIs, how does it work? How does the app state object know anything about routing? Simple: we already have a serialized form of all the react-router state: the URL. All we have to do is store the URL in the app state and keep it in sync with react-router, and the app state has everything it needs to render the UI.

People think that the app state object has to have everything, but it doesn't. It just has to have the primary state; anything that can be deduced can live outside of redux.

In the diagram from the original post, the blue thing is serializable dumb app state, and the green things are unserializable programs that exist in memory. As long as you can recreate the green things when loading up an app state, you're fine. And you can easily do this with react-router by just initializing it with the URL from the app state.

Since launching it, a bunch of people have already helped improve it in many ways, and a lot of people seem to be finding it useful. Thank you for providing feedback and contributing patches!

Just use react-router

The brilliant thing about just tracking the URL is that it takes almost no code at all. redux-simple-router is only 87 lines of code and it's easy to understand what's going on. You already have a lot of concepts to juggle (react, redux, react-router, etc); you shouldn't have to learn another large abstraction.

Everything you want to do can be done with react-router directly. A lot of people coming from redux-router seem to be surprised by this. Some people don't realize the following:

  • Routing components have all the information you need as properties. See the docs; the current location, params, and more are all there for you to use.
  • You can block route transitions with listenBefore.
  • You can inject code to run when a routing component is created with createElement, if you want to do stuff like automatically start loading data.

We should invest in the react-router community and figure out the right patterns for everybody using it, not just people using redux. We also get to use new react-router features immediately.

The only additional thing redux-simple-router provides is a way to change the URL with the updatePath action creator. The reason is that it's a very common use case to update the URL inside of an action creator; you might want to redirect the user to another page depending on the result of an async request, for example. You don't have access to the history object there.

You shouldn't really even be selecting the path state from the redux-simple-router state; try to only make top-level routing components actually depend on the URL.

So how does it work?

You can skip this section if you aren't interested in the nitty-gritty details. We use a pretty clever hack to simplify the syncing though, so I wanted to write about it!

You call syncReduxAndRouter with history and store objects and it will keep them in sync. It does this by listening to history changes with history.listen and state changes with store.subscribe and telling each other when something changes.

It's a little tricky because each listener needs to know when to "stop." If the app state changes, it needs to call history.pushState, but the history listener should see that it's up-to-date and not do anything. When it's the other way around, the history listener needs to call store.dispatch to update the path but the store listener should see that nothing has changed.

First, let's talk about history. How can we tell if anything has changed? We get the new location object so we just stringify it into a URL and then compare it with the URL in the app state. If it's the same, we do nothing. Pretty easy!

Detecting app state changes is a little harder. In previous versions, we were comparing the URL from state with the current location's URL. But this caused tons of problems. For example, if the user has installed a listenBefore hook, it will be invoked from the pushState call in the store subscriber (because the app state URL is different from the current URL). The user might dispatch actions in listenBefore and update other state though, and since we are subscribed to the whole store, our listener will run again. At this point the URL has not been updated yet so we will call pushState again, and the listenBefore hook will be called again, causing an infinite loop.

Even if we could somehow only trigger pushState calls when the URL app state changes, this is not semantically correct. Every single time the user tries to change the URL, we should always call pushState even if the URL is the same as the current one. This is how browsers work; think of clicking on a link to "/foo" even though "/foo" is the current URL: what happens?

In redux, reducers are pure so we cannot call pushState there. We could do it in a middleware (which is what redux-router does) but I really don't want to force people to install a middleware just for this. We could do it in the action creator, but that seems like the wrong time: reducers may respond to the UPDATE_PATH action and update some state, so we shouldn't rerender routing components until after reducing.

I came up with a clever hack: just use an id in the routing state and increment it whenever we want to trigger a pushState! This has drastically simplified everything, made it far more robust, and even better made testing really easy because we can just check that the changeId field is the right number.

We just have to keep track of the last changeId we've seen and compare it in the store subscriber. This means there's always a 1:1 relationship between updatePath action creator calls and pushState calls, no matter what. Try any transition logic you want; it should work!

It also simplifies how changes from the router to redux work: the router calls the updatePath action creator with an avoidRouterUpdate flag, and all we have to do in the reducer is just not increment changeId, so we won't call back into the router.

I think my favorite side effect of this technique is testing. Look at the tests and you'll see I can compare a bunch of changeIds to make sure that the right number of pushState calls are being made.

More Complex Examples of react-router

Originally I was going to walk through how I used react-router for complex use cases like server-side rendering. This post is already too long to go into details, and I don't have time to write another post, so I will leave you with a few points that will help you dig into the code to see how it works:

  • There's no problem making a component both a redux "connected" component and a route component. Here I'm exporting a connected Drafts page that will be installed in the router. That means the component can both select from state as well as be controlled by the router.
  • I perform data fetching by specifying a static populateStore function. On the client, the router will call this in createElement (seen here), and the backend can prepopulate the store by iterating over all route components and calling this method. The action creators are responsible for checking if the data is already loaded and not re-fetching on the frontend if it's already there (example).
  • The server uses the lower-level match API seen here to get the current route. This gives us flexibility to control everything. We store the current HTML status in redux (like a 500) so that components can change it. For example, the Post component can set a 404 code if the post isn't found. The server sends the page with the right HTML status code.
  • This also means the top-level App component can inspect the status code to see if it should display a special 404 or 500 page.

I really like how the react-router 1.0 API turned out. The idea seems to be use low-level APIs on the server so that you can control everything, but the client can simply render a Router component to automatically handle state. The two environments are different enough that this works great.

That's It

It's my goal to research ideas and present them in a way that helps other people. In this case a cool project, redux-simple-router, came out of it. I hope this post explains the reasons behind it, and that the links above show more complicated examples of using it.

We are working on porting react-redux-universal-hot-example to redux-simple-router, so that will be another example of all kinds of uses. We're really close to finishing it, and you can follow along in this issue.

I'm also going to add more examples in the repo itself. But the goal is that you should be able to just read react-router's docs and do whatever it tells you to do.

Lastly, the folks working on redux-router have put in a lot of good work and I don't mean to diminish that. I think it's healthy for multiple approaches to exist and everyone can learn something from each one.

Nick CameronMacro plans, overview

In this post I want to give a bit of an overview of the changes I'm planning to propose for the macro system. I haven't worked out some of the details yet, so this could change a lot.

To summarise, the broad thrusts of the redesign are:

  • changes to the way procedural macros work and the parts of the compiler they have access to,
  • change the hygiene algorithm, and what hygiene is applied to,
  • address modularisation issues,
  • syntactic changes.

I'll summarise each here, but there will probably be a blog post about each before a proper RFC. At the end of this blog post I'll talk about backwards compatibility.

I'd also like to support macro and ident inter-operation better, as described here.

Procedural macros

Mechanism

I intend to tweak the system of traits and enums, etc. to make procedural macros easier to use. My intention is that there should be a small number of function signatures that can be implemented (not just one, unfortunately, because I believe function-like vs attribute-like macros will take different arguments; furthermore, I think we need versions for hygienic expansion and expansion with DIY-hygiene, and the latter case must be supplied with some hygiene information in order for the function to do its own hygiene. I'm not certain that is the right approach though). Although this is not as Rust-y as using traits, I believe the simplicity benefits outweigh the loss in elegance.

All macros will take a set of tokens in and generate a set of tokens out. The token trees should be a simplified version of the compiler's internal token trees to allow procedural macros more flexibility (and forwards compatibility). For attribute-like macros, the code that they annotate must still parse (necessary due to internal attributes, unfortunately), but will be supplied as tokens to the macro itself.

I intend that libsyntax will remain unstable and (stable) procedural macros will not have direct access to it or any other compiler internals. We will create a new crate, libmacro (or something) which will re-export token trees from libsyntax and provide a whole bunch of functionality specifically for procedural macros. This library will take the usual path to stabilisation.

Macros will be able to parse tokens and expand macros in various ways. The output will be some kind of AST. However, after manipulating the AST, it is converted back into tokens to be passed back to the macro expander. Note that this requires us storing hygiene and span information directly in the tokens, not the AST.

I'm not sure exactly what the AST we provide should look like, nor the bounds on what should be in libmacro vs what can be supplied by outside libraries. I would like to start by providing no AST at all and see what the eco-system comes up with.

It is worth thinking about the stability implications of this proposal. At some point in the future, the procedural macro mechanism and libmacro will be stable. So, a crate using stable Rust can use a crate which provides a procedural macro. At some point later we evolve the language in a non-breaking way which changes the AST (internal to libsyntax). We must ensure that this does not change the structure of the token trees we give to macros. I believe that should not be a problem for a simple enough token tree. However, the procedural macro might expect those tokens to parse in a certain way, which they no longer do causing the procedural macro to fail and thus compilation to fail. Thus, the stability guarantees we provide users can be subverted by procedural macros. However, I don't think this is possible to prevent. In the most pathological case, the macro could check if the current date is later than a given one and in that case panic. So, we are basically passing the buck about backwards compatibility with the language to the procedural macro authors and the libraries they use. There is an obvious hazard here if a macro is widely used and badly written. I'm not sure if this can be addressed, other than making sure that libraries exist which make compatibility easy.

Libraries

I hope that the situation for macro authors will be similar to that for other authors: we provide a small but essential standard library (libmacro) and more functionality is provided by the ecosystem via crates.io.

The functionality I expect to see in libmacro should be focused on interaction with the rest of the parser and macro expander, including macro hygiene. I expect it to include:

  • interning a string and creating an ident token from a string
  • creating and manipulating tokens
  • expanding macros (macro_rules and procedural), possibly in different ways
  • manipulating the hygiene of tokens
  • manipulating expansion traces for spans
  • name resolution of module and macro names - note that I expect these to return token trees, which gives a macro access to the whole program, I'm not sure this is a good idea since it breaks locality for macros
  • check and set feature gates
  • mark attributes and imports as used

The most important external libraries I would like to see would be to provide an AST-like abstraction, parsing, and tools for building and manipulating AST. These already exist (syntex, ASTer), so I am confident we can have good solutions in this space, working towards crates which are provided on crates.io, but are officially blessed (similar to the goals of other libraries).

I would very much like to see quasi-quoting and pattern matching in blessed libraries. These are important tools, the former currently provided by libsyntax. I don't see any reason these must be provided by libmacro, and since quasi-quoting produces AST, they probably can't be (since they would be associated with a particular AST implementation). However, I would like to spend some time improving the current quasi-quoting system, in particular to make it work better with hygiene and expansion traces.

Alternatively, libmacro could provide quasi-quoting which produces token trees, and there is then a second step to produce AST. Since hygiene info will operate at the tokens level, this might be possible.

Pattern matching on tokens should provide functionality similar to that provided by macro_rules!, making writing procedural macros much easier. I'm convinced we need something here, but not sure of the design.

Naming and registration

See section on modularisation below, the same things apply to procedural macros as to macro_rules macros.

A macro called baz declared in a module bar inside a crate foo could be called using ::foo::bar::baz!(...) or imported using use foo::bar::baz!; and used as baz!(...). Other than a feature flag until procedural macros are stabilised, users of macros need no other annotations. When looking at an extern crate foo statement, the compiler will work out whether we are importing macros.

I expect that functions expected to work as procedural macros would be marked with an attribute (#[macro] or some such). We would also have #[cfg(macro)] for helper functions, etc. Initially, I expect a whole crate must be #[cfg(macro)], but eventually I would like to allow mixing in a crate (just as we allow macro_rules macros in the same crate as normal code).

There would be no need to register macros with the plugin registry.

A vaguely related issue is whether interaction between the macros and the compiler should be via normal function calls (to libmacro) or via IPC. The latter would allow procedural macros to be used without dynamic linking and thus permit a statically linked compiler.

Hygiene

I plan to change the hygiene algorithm we use from mtwt to sets of scopes. This allows us to use hygiene information in name resolution, thus alleviating the 'absolute path' problem in macros. We can also use this information to support hygienic checking of privacy. I'll explain the algorithm and how it will apply to Rust in another blog post. I think this algorithm will be easier for procedural macro authors to work with too.

Orthogonally, I want to make all identifiers hygienic, not just variables and labels. I would also like to support hygienic unsafety. I believe both these things are more implementation than design issues.

Modularisation

The goal here is to treat macros the same way as other items, naming via paths and allowing imports. This includes naming of attributes, which will allow paths for naming (e.g., #[foo::bar::baz]). Ordering of macros should also not be important. The mechanism to support this is moving parts of name resolution and privacy checking to macro expansion time. The details of this (and the interaction with sets of scopes hygiene, which essentially gives a new mechanism for name resolution) are involved.

Syntax

These things are nice to have, rather than core parts of the plan. New syntax for procedural macros is covered above.

I would like to fix the expansion issues with arguments and nested macros, see blog post.

I propose that new macros should use macro! rather than macro_rules!.

I would like a syntactic form for macro_rules macros which only matches a single pattern and is more lightweight than the current syntax. The current syntax would still be used where there are multiple patterns. Something like,

macro! foo(...) => {  
    ...
}

Perhaps we drop the => too.

We need to allow privacy annotations for macros, not sure the best way to do this: pub macro! foo { ... } or macro! pub foo { ... } or something else.

Backwards compatibility

Procedural macros are currently unstable, there will be a lot of breaking changes, but the reward is a path to stability.

macro_rules! is a stable part of the language. It will not break (modulo usual caveat about bug fixes). The plan is to introduce a whole new macro system around macro!, if you have macros currently called macro!, I guess we break them (we will run a warning cycle for this and try and help anyone who is affected). We will deprecate macro_rules! once macro! is stable. We will track usage with the intention of removing macro_rules at 2.0 or 3.0 or whatever. All macros in the standard libraries will be converted to using macro!, this will be a breaking change, we will mitigate by continuing to support the old but deprecated versions of the macros. Hopefully, modularisation will support this (needs more thought to be sure). The only change for users of macros will be how the macro is named, not how it is used (modulo new applications of hygiene).

Most existing macro_rules! macros should be valid macro! macros. The only difference will be using macro! instead of macro_rules! and the new scoping/naming rules may lead to name clashes that didn't exist before (note this is not in itself a breaking change, it is a side effect of using the new system). Macros converted in this way should only break where they take advantage of holes in the current hygiene system. I hope that this is a low enough bar that adoption of macro! by macro_rules! authors will be quick.

Hygiene

There are two backwards compatibility hazards with hygiene, both affect only macro_rules! macros: we must emulate the mtwt algorithm with the sets of scopes algorithm, and we must ensure unhygienic name resolution for items which are currently not treated hygienically. In the second case, I think we can simulate unhygienic expansion for types etc, by using the set of scopes for the macro use-site, rather than the proper set. Since only local variables are currently treated hygienically, I believe this means the first case will Just Work. More details on this in a future blog post.

Air MozillaPrivacy for Normal People

Privacy for Normal People Mozilla cares deeply about user control. But designing products that protect users is not always obvious. Sometimes products give the illusion of control and security...

Armen ZambranoWelcome F3real, xenny and MikeLing!

As described by jmaher, this week we started our first week of mozci's quarter of contribution.

I want to personally welcome Stefan, Vaibhav and Mike to mozci. We hope you get to learn and we thank you for helping Mozilla move forward in this corner of our automation systems.

I also want to give thanks to Alice for committing to mentoring. This could not be possible without her help.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Armen ZambranoMozilla CI tools meet up

In order to help the contributors of mozci's quarter of contribution, we have set up a Mozci meet up this Friday.

If you're interested in learning about Mozilla's CI, how to contribute, or how to build your own scheduling with mozci, come and join us!

9am ET -> other time zones
Vidyo room: https://v.mozilla.com/flex.html?roomdirect.html&key=GC1ftgyxhW2y


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air MozillaMartes mozilleros, 24 Nov 2015

Martes mozilleros A bi-weekly meeting to talk about the state of Mozilla, the community and its projects.

Kim MoirUSENIX Release Engineering Summit 2015 recap

November 13th, I attended the USENIX Release Engineering Summit in Washington, DC. This summit was alongside the larger LISA conference at the same venue. Thanks to Dinah McNutt, Gareth Bowles, Chris Cooper, Dan Tehranian and John O'Duinn for organizing.



I gave two talks at the summit.  One was a long talk on how we have scaled our Android testing infrastructure on AWS, as well as a look back at how it evolved over the years.

Picture by Tim Norris - Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)
https://www.flickr.com/photos/tim_norris/2600844073/sizes/o/

Scaling mobile testing on AWS: Emulators all the way down from Kim Moir

I gave a second lightning talk in the afternoon on the problems we face with our large distributed continuous integration, build and release pipeline, and how we are working to address the issues. The theme of this talk was that managing a large distributed system is like being the caretaker for the water, or some days, the sewer system for a city. We are constantly looking for system leaks and implementing system monitoring, and we will probably have to replace parts of the system with something new while keeping the existing one running.

Picture by Korona Lacasse - Creative Commons 2.0 Attribution 2.0 Generic https://www.flickr.com/photos/korona4reel/14107877324/sizes/l



In preparation for this talk, I did a lot of reading on complex systems design and designing for recovery from failure in distributed systems. In particular, I read Donella Meadows' book Thinking in Systems. (Cate Huston reviewed the book here). I also watched several talks by people about the challenges they face managing their distributed systems.
I'd also like to thank all the members of Mozilla releng/ateam who reviewed my slides and provided feedback before I gave the presentations.
The attendees of the summit joined the same keynote as the LISA attendees. Jez Humble, well known for his Continuous Delivery and Lean Enterprise books, provided a keynote on Lean Configuration Management which I really enjoyed. (Older versions of the slides from another conference are available here and here.)



In particular, I enjoyed his discussion of the cultural aspects of devops. I especially liked that he stated that "You should not have to have planned downtime or people working outside business hours to release". He also talked a bit about how many of the leaders who are looked up to as visionaries in the tech industry are known for not treating people very well, and how this is not a good example to set for others who believe that to be the key to their success. For instance, he said something like "what more could Steve Jobs have accomplished had he treated his employees less harshly?"

Another concept he discussed which I found interesting was that of the strangler application. When moving from a large monolithic application, the goal is to split out the existing functionality into services until the original application is left with nothing. This is exactly what Mozilla releng is doing as we migrate from Buildbot to taskcluster.
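As a rough sketch of the pattern (the names below are invented for illustration and are not releng's actual code), each piece of work is routed to the new system once it has been carved out, and everything else keeps falling through to the monolith until nothing is left:

enum Backend {
    Legacy,     // the old monolithic system (Buildbot, in our case)
    NewService, // functionality already split out (Taskcluster)
}

fn route(task: &str) -> Backend {
    match task {
        // Work that has already been migrated goes to the new system...
        "linux-build" | "linux-test" => Backend::NewService,
        // ...everything else still falls through to the monolith.
        _ => Backend::Legacy,
    }
}

fn main() {
    for &task in ["linux-build", "mac-build"].iter() {
        match route(task) {
            Backend::NewService => println!("{} -> new service", task),
            Backend::Legacy => println!("{} -> legacy monolith", task),
        }
    }
}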


    http://www.slideshare.net/jezhumble/architecting-for-continuous-delivery-54192503


At the release engineering summit itself, Lukas Blakk from Pinterest gave a fantastic talk, Stop Releasing off Your Laptop—Implementing a Mobile App Release Management Process from Scratch in a Startup or Small Company. The talk included a grumpy cat picture to depict how Lukas thought the rest of the company felt when a more structured release process was implemented.


Lukas also included a timeline of the tasks she implemented in her first six months working at Pinterest. Very impressive to see the transition!


Another talk I enjoyed was Chaos Patterns - Architecting for Failure in Distributed Systems by Jos Boumans of Krux. (Similar slides from an earlier conference are available here.) He talked about some high-profile distributed systems that failed and how chaos engineering can help illuminate these issues before they hit you in production.


For instance, it is impossible for Netflix to model their entire system outside of production, given that they account for around one third of nightly downstream bandwidth in the US.

Evan Willey and Dave Liebreich from Pivotal Cloud Foundry gave a talk entitled "Pivotal Cloud Foundry Release Engineering: Moving Integration Upstream Where It Belongs". I found this talk interesting because they talked about how they built Concourse, a CI system that is more scalable and natively builds pipelines. Travis and Jenkins are good for small projects, but they simply don't scale to large numbers of commits, platforms to test, or complicated pipelines. We followed a similar path, which led us to develop Taskcluster.

There were many more great talks; hopefully more slides will be up soon!

Henrik Skupin: Survey about sharing information inside the Firefox Automation team

Within the Firefox Automation team we have been struggling a bit to share information about our work over the last couple of months. That mainly happened because I was on my own and not able to blog more often than once a quarter. The same applies to our dev-automation mailing list, which mostly only received emails from Travis CI with testing results.

Given that the team has now grown to 4 people (besides me, that's Maja Frydrychowicz, Syd Polk, and David Burns), we want to be more open again and also try to get more people involved in our projects. To ensure that we do not use the wrong communication channels – depending on where most of our readers are – I have set up a little survey. It will only take you a minute to go through, but it will help us a lot to know more about the preferences of our automation geeks. So please take that little bit of time and help us.

The survey can be found here and is open until the end of November 2015:

https://www.surveymonkey.com/r/528WYYJ

Thank you a lot!

Nick Cameron: Macros pt6 - more issues

I discovered another couple of issues with Rust macros (both affect the macro_rules flavour).

Nested macros and arguments

These don't work because of the way macros do substitution. When expanding a macro, the expander looks for token strings starting with $ to expand. If there is a variable which is not bound by the outer macro, then it is an error. E.g.,

macro_rules! foo {  
    () => {
        macro_rules! bar {
            ($x: ident) => { $x }
        }
        bar!(foo);
    }
}

When we try to expand foo!(), the expander errors out because it can't find a value for $x; it doesn't know that macro_rules! bar is binding $x.

The proper solution here is to make macros aware of binding and lexical scoping etc. However, I'm not sure that is possible because macros are not parsed until after expansion. We might be able to fix this by just being less eager to report these errors. We wouldn't get proper lexical scoping, i.e., all macro variables would need to have different names, but at least the easy cases would work.

Matching expression fragments

Example:

macro_rules! foo {  
    ( if $e:expr { $s:stmt } ) => {
        if $e {
            $s
        }
    }
}

fn main() {  
    let x = 1;
    foo! {
        if 0 < x {
            ()
        }
    }
}

This gives an error because it tries to parse x { as the start of a struct literal. We have a hack in the parser: in some contexts where we parse an expression, we explicitly forbid struct literals from appearing so that we can correctly parse a following block. This is not usually apparent, but in this case, where the macro expects an expr, what we'd like is 'an expression but not a struct literal'. However, exposing this level of detail about the parser implementation to macro authors (not even procedural macro authors!) feels bad. Not sure how to tackle this one.

Relatedly, it would be nice to be able to match other fragments of the AST, for example the interior of a block. Again, there is the issue of how much of the internals we wish to expose.

(HT @bmastenbrook for the second issue).

Chris Finke: Reenact Now Available for Android

I’ve increased the audience for Reenact (an app for reenacting photos) by 100,000% by porting it from Firefox OS to Android.

[Screenshot: Reenact for Android]

It took me about ten evenings to go from “I don’t even know what language Android apps are written in” to submitting the .apk to the Google Play™ store. I’d like to thank Stack Overflow, the Android developer docs, and Android Studio’s autocomplete.

Reenact for Android, like Reenact for Firefox OS, is open-source; the complete source for both apps is available on GitHub. Also like the Firefox OS app, Reenact for Android is free and ad-free. Just think: if even just 10% of all 1 billion Android users install Reenact, I’d have $0!

In addition to making Reenact available on Android, I’ve launched Reenact.me, a home for the app. If you try out Reenact, send your photo to gallery@reenact.me to get it included in the photo gallery on Reenact.me.

You can install Reenact on Google Play or directly from Reenact.me. Try it out and let me know how it works on your device!

Mozilla Security Blog: Improving Revocation: OCSP Must-Staple and Short-lived Certificates

Last year, we laid out a long-range plan for improving revocation support for Firefox. As of this week, we’ve completed most of the major elements of that plan. After adding OneCRL earlier this year, we have recently added support for OCSP Must-Staple and short-lived certificates. Together, these changes give website owners several ways to achieve fast, secure certificate revocation.

In an ideal world, the browser would perform an online status check (such as OCSP) whenever it verifies a certificate, and reject the certificate if the check failed. However, these checks can be slow and unreliable. They time out about 15% of the time, and take about 350ms even when they succeed. Browsers generally soft-fail on revocation in an attempt to balance these concerns.

To get back to stronger revocation checking, we have added support for short-lived certificates and Must-Staple to let sites opt in to hard failures. As of Firefox 41, Firefox will not do “live” OCSP queries for sufficiently short-lived certs (with a lifetime shorter than the value set in “security.pki.cert_short_lifetime_in_days”). Instead, Firefox will just assume the certificate is valid. There is currently no default threshold set, so users need to configure it. We are collecting telemetry on certificate lifetimes, and expect to set the threshold somewhere around the maximum OCSP response lifetime specified in the baseline requirements.

OCSP Must-Staple makes use of the recently specified TLS Feature Extension. When a CA adds this extension to a certificate, it requires your browser to ensure a stapled OCSP response is present in the TLS handshake. If an OCSP response is not present, the connection will fail and Firefox will display a non-overridable error page. This feature will be included in Firefox 45, currently scheduled to be released in March 2016.
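Putting the two mechanisms together, the decision logic roughly looks like the following sketch (type and field names are invented for illustration; this is not Firefox's actual implementation):

struct Cert {
    lifetime_days: u32,
    must_staple: bool, // certificate carries the TLS Feature ("must-staple") extension
}

fn check_revocation(
    cert: &Cert,
    stapled_ocsp_present: bool,
    // Mirrors the "security.pki.cert_short_lifetime_in_days" pref; None means
    // no threshold has been configured.
    short_lifetime_threshold_days: Option<u32>,
) -> Result<(), &'static str> {
    if cert.must_staple && !stapled_ocsp_present {
        // Hard failure: the browser shows a non-overridable error page.
        return Err("must-staple certificate presented without a stapled OCSP response");
    }
    if let Some(threshold) = short_lifetime_threshold_days {
        if cert.lifetime_days < threshold {
            // Sufficiently short-lived: skip the live OCSP query and treat the
            // certificate as valid.
            return Ok(());
        }
    }
    // Otherwise fall back to a live OCSP query, which browsers soft-fail on
    // (elided in this sketch).
    Ok(())
}

fn main() {
    let cert = Cert { lifetime_days: 3, must_staple: false };
    assert!(check_revocation(&cert, false, Some(10)).is_ok());
}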

Mozilla Addons Blog: Test your add-ons for Multi-process Firefox compatibility

You might have heard the news that future versions of Firefox will run the browser UI separately from web content. This is called Multi-process Firefox (also “Electrolysis” or “e10s”), and it is scheduled for release in the first quarter of 2016.

If your add-on code accesses web content directly, using an overlay extension, a bootstrapped extension, or low-level SDK APIs like window/utils or tabs/utils, then you will probably be affected.

To minimize the impact on users of your add-ons, we are urging you to test your add-ons for compatibility. You can find documentation on how to make them compatible here.

Starting Nov. 24, 2015, we are available to assist you every Tuesday in the #addons channel at irc.mozilla.org. Click here to see the schedule. Whether you need help testing or making your add-ons compatible, we’re here to help!

Emily Dunham: PSA: Docker on Ubuntu

$ sudo apt-get install docker
$ which docker
$ docker
The program 'docker' is currently not installed. You can install it by typing:
apt-get install docker
$ apt-get install docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.

Oh, you wanted to run a docker container? The docker package in Ubuntu is some window manager dock thingy. The docker binary that runs containers comes from the docker.io system package.

$ sudo apt-get install docker.io
$ which docker
/usr/bin/docker

Also, if it can’t connect to its socket:

FATA[0000] Post http:///var/run/docker.sock/v1.18/containers/create: dial
unix /var/run/docker.sock: permission denied. Are you trying to connect to a
TLS-enabled daemon without TLS?

you need to make sure you’re in the right group:

sudo usermod -aG docker <username>; newgrp docker

(thanks, stackoverflow!)

Daniel Stenberg: copy as curl

Using curl to perform an operation a user just managed to do with his or her browser is one of the more common things people ask for help with.

How do you get a curl command line to get a resource, just like the browser would get it, nice and easy? Both Chrome and Firefox have provided this feature for quite some time already!

From Firefox

You bring up the site with Firefox’s network tools (“Web Developer->Network”) open so you can see the HTTP traffic. You then right-click on the specific request you want to repeat, and in the menu that appears you select “Copy as cURL”, like the screenshot below shows. The operation puts a generated curl command line on your clipboard, and you can then paste it into your favorite shell window. This feature is available by default in all Firefox installations.

[Screenshot: Copy as cURL in Firefox's network tools]

From Chrome

When you open More tools->Developer tools in Chrome and select the Network tab, you see the HTTP traffic used to get the resources of the site. On the line of the specific resource you’re interested in, you right-click with the mouse and select “Copy as cURL”, and it’ll generate a command line for you in your clipboard. Paste that in a shell to get a curl command line that makes the transfer. This feature is available by default in all Chrome and Chromium installations.

[Screenshot: Copy as cURL in Chrome's developer tools]

On Firefox, without using the devtools

If this is something you’d like to do more often, you probably find it inconvenient and cumbersome to pop up the developer tools just to get the command line copied. Then cliget is the perfect add-on for you, as it adds a new option to the right-click menu so you can get a command line generated really quickly, like this example when I right-click an image in Firefox:

[Screenshot: cliget in Firefox's right-click menu]

This Week In Rust: This Week in Rust 106

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • nom 1.0 is released.
  • Freepass. The free password manager for power users.
  • Barcoders. A barcode encoding library for the Rust programming language.
  • fst. Fast implementation of ordered sets and maps using finite state machines.
  • Rusty Code. Advanced language support for the Rust language in Visual Studio Code.
  • Dybuk. Prettify the ugly Rustc messages (inspired by Elm).
  • Substudy. Use SRT subtitle files to study foreign languages.

Updates from Rust Core

99 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • Alexander Bulaev
  • Ashkan Kiani
  • Devon Hollowood
  • Doug Goldstein
  • Jean Maillard
  • Joshua Holmer
  • Matthias Kauer
  • Ole Krüger
  • Ravi Shankar

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is nom, a library of fast zero-copy parser combinators, which has already been used to create safe, high-performance parsers for a number of formats both binary and textual. nom just reached version 1.0, too, so congratulations for both the major version and the CotW status!

Thanks to Reddit user gbersac for the nom-ination! Submit your suggestions for next week!