# Brackets – a great editor for JavaScript, HTML and CSS

http://brackets.io/

Besides being great, it also has a fast release cycle (2.5 weeks) and is open source. It is being developed by folks at Adobe (plus others from the OSS community).

Favorite features:

• Nice hinting and inline code helpers for CSS.
• Live editing of HTML and CSS – you can fire up a browser (currently only Chrome) that is updated as soon as you change the code, no need for F5. It also highlights the HTML element your cursor is on.
• Very lightweight. Yet, it has everything you need to start working on a web project. You just install it.
• It comes with JSLint built-in, which immediately bugs you about your JavaScript pitfalls. Other IDEs I’ve used leave it to you to set up that kind of stuff.
• Extensible. The extensions I’ve installed include CSS color hinting, and a context-menu to open a file’s folder in the OS’s file system thingy (explorer, in Windows).

Weaknesses: a few basic editor features (such as code folding or more intelligent search) are lacking, but the basics are good enough to be very productive. Extension management is currently done via manual file copying, but according to the dev blogs, they’re working on an extension manager.

Platform support: currently built only for Windows and Mac OS X.

Tip: if you’re using Live Development on a site that uses AJAX to access its own web server, you need to either enable cross-site requests on your development web server, or (easier) tell Brackets where your development web server is serving your HTML/JS/CSS from – it’s under File…Project Settings. See here for more.

# Windows 8 is the worst.

Don’t install Windows 8. Don’t buy a laptop that has it.

I tried. I installed Windows 8 on my personal laptop. It was so cheap – $15 to upgrade – that I decided to give it a shot. What a mistake. If you do anything at all with your computer, don’t use Windows 8. For a tablet – maybe, I don’t know. For a computer – no way. Just say no. I’m not a hater or a flamer; I use almost only Microsoft products all day long (Windows 7 and Visual Studio most of my work day), and I like Windows 7. Here’s a very short list of the stuff that kills me (there are many other problems):

1. Windows 8 restarts your machine WHILE YOU ARE USING IT to finish installing updates. It warns you, but never gives you a chance to prevent it. You could be in the middle of reviewing an emergency patient’s diagnosis, and *BOOM* – the machine shuts down and is unusable for at least 5 minutes. You have no option to save your work. You have no option to stop this from happening.
2. The “Metro UI” or “new Start screen” or “tile area” is horribly confusing because it hides everything else. You can’t see what’s running at a glance. You can’t change this behavior.
3. The “charms” bar requires you to point your mouse at a magical location. You can’t change this. You can’t disable this (and make the freakin’ bar always visible).
4. There is no bloody CLOCK or calendar on the default screen, that annoying “Start screen”. You can’t add one without installing a third-party “Windows Store app”.
5. The Control Panel is split between the “classic” Control Panel and some super-simplified – yet non-overlapping – set of options in the “charms” settings area. Go figure where to fix something.

Too bad; Microsoft really screwed this one up.

p.s. Some search engine food: windows 8 sucks, windows 8 is bad, windows 8 is horrible, terrible, nasty, don’t use windows 8, don’t install windows 8, keep windows 7, windows 7 is better than windows 8. Google, you get the point.

# Penalty search, part 4 and finale: the x-section algorithm

## What’s the problem?
This is the last post in a series dedicated to exploring the problem of “penalty search”. Penalty search involves an ordered array: the first part of the array holds values that do not satisfy some property, while the second part – the rest of the values – do satisfy it. Testing a value for the property carries an index-dependent penalty, given by a cost function. We want to find the first value (lowest index) in the array that satisfies the property. The “total cost” of a search attempt is the total penalty spent looking for the value until it is found. The goal is to devise an algorithm that searches the array with minimal expected total cost.

## Why is this interesting?

Besides being fun to wrestle with, this problem is interesting because it deals with a realistic physical situation: you need to find the ideal timing for some process. Any time-dependent experiment can be modeled by this problem. Every attempt takes time, so the cost of a test is the time spent on it. We want to find the solution as quickly as possible, so we minimize the expected time spent searching.

## Linear cost – recap

The last post explored how a linear cost function affects the recursive difference equation that calculates the expected cost. Linear cost is interesting because, first, it is easier to analyze, and second, it fits problems where the penalty is the amount of time spent waiting to check a time-sensitive process. Analytically, the nice property of linear cost is that the size of the array being searched is what determines the cost (up to a constant), not the specific range of indexes. In that last post we took advantage of this translational quasi-invariance of the linear cost function to simplify the problem.
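To make the recap concrete, here is a small runnable sketch – my own illustration, not code from the series – that brute-forces the recursion for a linear cost $c(t)=at+b$ and a uniform prior. The function name and the handling of the translation term are assumptions based on the derivation recapped above.

```haskell
-- Hedged sketch (my illustration, not the series' code): brute-force
-- minimisation of the linear-cost recursion under a uniform prior.
-- h a b n is n times the minimal expected cost of a size-n region
-- whose first index is 0; testing index t costs a*t + b.
h :: Double -> Double -> Int -> Double
h _ _ 1 = 0                                        -- one cell: nothing to test
h a b n = minimum [ total t | t <- [0 .. n - 2] ]  -- never test the last index
  where
    total t =
      fromIntegral n * (a * fromIntegral t + b)    -- every outcome pays this test
        + h a b (t + 1)                            -- left region [0..t]
        + h a b (n - t - 1)                        -- right region, translated to 0
        + shift t                                  -- translation correction
    -- the correction vanishes for one-celled regions (h(1) = 0 at any offset)
    shift t
      | n - t - 1 <= 1 = 0
      | otherwise      = a * fromIntegral (n - t - 1) * (fromIntegral t + 1)

main :: IO ()
main = do
  print (h 0 1 2)  -- constant cost (a=0, b=1), two cells: 2.0, i.e. one test
  print (h 0 1 3)  -- three cells: 5.0, i.e. 5/3 tests on average
  print (h 3 1 3)  -- increasing cost 3t+1: 11.0, achieved by the biased split t=0
```

Dividing by the region size recovers the actual expected cost; e.g. `h 0 1 3 / 3` is the familiar 1 + 2/3 expected comparisons for binary-section search on three cells.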
After defining a couple of helper functions we ended up with the following equation:

$h(N)=N(at_N+b)+h(t_N+1)+h(N - t_N - 1)+a(N - t_N - 1)(t_N+1)$

Where:

• $N$ is the size of the region (array) being searched
• $h(N)$ represents the expected cost of a region (almost – you need to divide by $N$ to get the actual cost)
• $t_N$ is the index that our algorithm decides to test in the current step
• $at_N+b$ is the cost of testing at index $t_N$

## Constant ratio search

For linear cost, we expect the solution – the optimal search algorithm – to be somewhat similar to binary section search: we always pick a constant ratio (“section”) of the current range to choose our next search target. Although I can’t prove that this is a valid assumption, I think it’s interesting enough to see what the ratio should be if we do assume a constant-ratio search.

To begin, we define $t_N := \lfloor rN \rfloor$ where $r \in [0,1)$ (remember we are already assuming that the region is $[0,N-1]$). To make things simpler (though admittedly less precise) we’ll drop the floor notation and treat $rN$ as if it were an integer. Substituting in the last equation, we get:

$h(N)=N(arN +b)+h(rN +1)+h(N - rN - 1)+a(N- rN - 1)(rN +1)$

With a little simplification (saving time using Maxima) we get:

$h(N)=h(rN+1)+h((1-r)N-1)+(2ar-ar^2)N^2+(-2ar+b+a)N-a$

Notice that the two recursive references to $h$ nicely represent testing the left and right sub-regions around the split point $rN$.

I’m personally not any closer to solving that equation, so here’s where we use approximations. In the last post we saw that for $N=1$ the value must be zero. We can use this information to solve $N=2$:

$h(2)=h(2r+1)+h(2(1-r)-1)+4(2ar-ar^2)+2(a+b-2ar)-a$

Following the same logic as in that last post, we know that we can only split a region of size 2 into two subregions of size 1. This means both recursive references must be calls to $h(1)$.
This limits the range of $r$:

$\lfloor 2r \rfloor +1=1$ and $2- \lfloor 2r \rfloor -1=1$

Implying:

$\lfloor 2r \rfloor =0 \implies r < 1/2$

Here we have a first result: we must split somewhere lower than the middle. “Less than” makes sense if our linear cost is increasing, i.e. $a>0$. I didn’t have time to go back and check, but I’m guessing I made that assumption implicitly somewhere along the way. It also makes sense intuitively: if the cost is increasing linearly, the upper bound for a constant split ratio is the middle – which is exactly binary section when the cost is constant (not increasing at all, $a=0$).

It’s an interesting exercise to plug in values for $N$ and see where the bounds on $r$ go. I’m leaving that for another day, or for you to try. That’s all – if you find interesting instances of this problem, let me know!

# Penalty search, part 3: exploring linear cost

## Recap

The previous post – the second in a series discussing optimal search under test penalty – concluded with an open question. We can write an equation describing the expected cost, which we want to minimize by picking the best $t_i$ for each interval $[x_i,y_i]$:

$E[cost(x_i,y_i)] = c(t_i) + E[cost(x_{i+1},y_{i+1})]$

The open question is: how? $c(t_i)$ is the test cost (penalty) function. Different values have different costs, so we expect that the optimal search will take the different costs into account and be smart about picking which values to test. As we saw previously, for constant cost the algorithm is simply binary section search.

We have a recursive function of two variables (the range – $x_i, y_i$) which we want to minimize by picking yet another function (the one that picks $t_i$ given the current range). I don’t know how to solve this generally, which is why I resorted to manual calculations and pretty graphs. In that same previous post I used dynamic programming – a natural fit for Haskell – to find the optimal decision tree for a specific range and cost function.
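The brute-force side of this is easy to sketch. Below is a hedged illustration (mine, not the program from the post): it minimises the expected-cost recursion over every admissible first test, assuming a uniform prior, and returns both the minimal expected cost and the best first test index.

```haskell
-- Hedged sketch: minimal expected cost and best first test for a range
-- [x,y], uniform prior, arbitrary per-test cost function c.
-- Exponential without memoisation -- fine for tiny ranges only.
search :: (Int -> Double) -> Int -> Int -> (Double, Int)
search c x y
  | x == y    = (0, x)                                  -- nothing left to test
  | otherwise = minimum [ (branch t, t) | t <- [x .. y - 1] ]
  where
    n = fromIntegral (y - x + 1) :: Double
    branch t =
      c t + (fromIntegral (t - x + 1) / n) * fst (search c x t)
          + (fromIntegral (y - t)     / n) * fst (search c (t + 1) y)

main :: IO ()
main = do
  print (search (const 1) 0 1)                   -- (1.0,0): one test, two cells
  print (search (\t -> fromIntegral t + 1) 0 2)  -- increasing cost: best first test is 0
```

With a constant cost this reproduces binary-section behaviour; with an increasing cost the chosen first test drifts toward the cheap end of the range.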
Thanks to Control.Parallel it was easy to improve performance by taking advantage of today’s multi-core processors. What I would really like, though, is to find a closed-form solution, one that defines the optimal search algorithm.

## Disclaimer

I’m no mathematician (nor a computer science wizard). If you think there’s a mistake anywhere, let me know.

## When you don’t know, make the problem simpler

So what to do when you don’t know how to solve a problem? Make more assumptions.™ Hopefully, we’ll learn something from the solution of the simpler problem.

### Uniform Probability

The first assumption we can make is that of uniform probability (of the search target, in the range). Here’s our equation:

$E[cost(x_i,y_i)] = c(t_i) + E[cost(x_{i+1},y_{i+1})]$

The range $[x_{i+1},y_{i+1}]$ is whatever our algorithm decides to search in the next step, and $t_i$ is the currently tested value. We expand the expectation $E[\cdot]$ of the next test (the next level in the decision tree). For clarity I’ll drop the indexes, since we don’t need them now:

$E[cost(x,y)] = c(t) + P_{x,y}(s \le t)E[cost(x,t)] + P_{x,y}(s > t)E[cost(t+1,y)]$

Where $P_{x,y}(A) := P(A \mid s \in [x,y])$ is a conditional probability. From now on I’ll skip writing the expectation $E$. Using our assumption of uniform probability we can write:

$cost(x,y) = c(t) + \frac{1}{y-x+1} \left ((t-x+1)cost(x,t) + (y-t)cost(t+1,y) \right )$

Notice that now our problem is mostly a function of range size, so we can define $N_{x,y}:=y-x+1$ and write:

$cost(x,y) = c(t) + \frac{1}{N_{x,y}}(N_{x,t}cost(x,t) + N_{t+1,y}cost(t+1,y))$

Multiply by $N_{x,y}$ to get:

$N_{x,y}cost(x,y) = N_{x,y}c(t) + N_{x,t}cost(x,t) + N_{t+1,y}cost(t+1,y)$

Being a non-mathematician, here’s where I said “Aha!”. We can make things simpler if we define:

$g(x,y) := N_{x,y}cost(x,y)$

Since $g$ is just $cost$ multiplied by $y-x+1$, minimizing $g$ will also minimize $cost$.
Using this definition the equation becomes:

$g(x,y) = N_{x,y}c(t) + g(x,t) + g(t+1,y)$

…which is beginning to look a lot more manageable. And remember – so far, the only assumption we’ve made is that of uniform probability. But how to proceed? We still have a recursive equation in two variables, $x$ and $y$.

### From two variables to one

Someone must have once said: “one is better than two.” Variables. In a recursive function. Let’s add another assumption – that the cost function is linear:

$c(t) = at+b$

Where $a$ and $b$ are arbitrary constants. Why linear? This cost function has the nice property that the cost of any search tree is invariant under translation (of the searched range) up to a constant addition. That is:

$cost(x,y)+au=cost(x+u,y+u)$

or

$g(x,y)+N_{x,y}au = g(x+u,y+u)$

How do I know this? Here’s an informal proof. A search tree consists of nodes representing the values being tested (and the cost being spent). If we translate the underlying range, we are in effect changing the linear cost function by adding a constant that depends on the amount of translation. The cost of each node in the search tree then varies by that same constant, leaving the distribution of cost throughout the tree equivalent and keeping the expected cost minimal relative to other trees on the same range. This property means that, given a range size $N$, we can find the best search tree for just one translation of that range (say, $x=0$) and it will be correct for any other range of the same size.

Plugging the linear cost into the equation from before gives us:

$g(x_i,y_i) = N_i a t_i + N_i b + g(x_i,t_i) + g(t_i+1,y_i)$

I’m explicitly writing that the choice of $t$ depends on $x,y$, just to make things clearer. We’ll use the indexed notation $t_i$ to signify that $t$ depends on the $i$th range, and do the same for the range size, that is, $N_i := N_{x_i,y_i}$. Now we can use the translation invariance to replace $x_i$ everywhere with 0.
At the same time we can get rid of $y_i$, since it is equivalent to make the function depend on $N_i=y_i+1$. To that end, define:

$h(N) := g(0,N-1) = g(x,x+N-1)-Nax = N(cost(x,x+N-1)-ax)$

Then the equation above for $g(x_i,y_i)$ becomes:

$h(N)=Nat_N+Nb+h(t_N+1)+h(N - t_N - 1)+a(N - t_N - 1)(t_N+1)$

The last term is due to the required translation of $h(N-t_N-1)$ from the origin to $t_N+1$ (the original value was $g(t_i+1,y_i)$).

## Results please…

Let’s look at the actual value of the expected cost for a few small values of $N$ (region sizes):

### N=1

When the size of the region $N$ is 1, there is nothing to check – by assumption the target value is in the region, and there is only one cell, which must be it. Since there is nothing to check, there is no cost, so $cost(0,0)=0=h(1)$. Also, we don’t care what the offset (translation) of this one-celled region is – so we don’t need to add the additive term when calculating translated calls such as $cost(x,x)$, because the value is always 0.

### N=2

Let’s move on to $N=2$. For a region of two cells, the only logical algorithm is to check the first cell. If it yields the value 0, then the target (the first post-step cell) is cell #2; otherwise (the value at the first cell is 1) the target is cell #1. So the expected cost is just the cost of testing the first cell – which has index 0 – which is $0a+b=b=cost(0,1)$. That’s what simple logic dictates. Now let’s see how our equation fares:

$h(2) = 2at+2b+h(t+1)+h(2-t-1)+a(2-t-1)(t+1)$

Remember that we never test the last index, so in this case $t$ can only take the value 0 (matching our logic that we can only test cell 0). Since the second recursive call ($h(2-t-1)$) turns out to be on a one-celled region, we drop the additive term and replace it with 0:

$h(2) = 2b+h(1)+h(1)=2b$

Since $h(2)=2cost(0,1)=2b$, the result matches our “manual logic” from above.

### N=3

Here’s where things start to get interesting.
$N=3$ is the first case where we have to compare different values of $t$ to find out which one yields the minimal expected cost. The possible values are 0 and 1:

• $t=0 \Rightarrow h(3)=3b+h(1)+h(2)+2a=3b+0+2b+2a=2a+5b$
• $t=1 \Rightarrow h(3)=3a+3b+h(2)+h(1)+0=3a+3b+2b+0=3a+5b$

So which is less? Since $2a+5b \le 3a+5b$ whenever $a \ge 0$, we should pick $t=0$. This is interesting, because it means the optimal first test depends on the cost function: with an increasing linear cost ($a>0$) the search is biased towards 0, while for constant cost ($a=0$) the two choices tie and binary section does just as well.

## What’s next

We could spend the rest of our lives calculating larger and larger values of $N$. What we really want, though, is a general, closed-form solution for any $N$. In my next post I’ll make another weakening assumption that will make that possible.

# Penalty-search problem revisited: Dynamic Programming & Control.Parallel to the rescue

My last post was about searching with a test penalty. The problem boils down to minimizing a recursive difference equation. Since I didn’t (and still don’t) have a closed-form solution, I had to base any conclusions on brute-force tests that find the best tree. The run time was dreadfully slow, so I could calculate the solution for no more than a 15-cell array. For small arrays and a small cost constant, the solution for linear cost is just a half-interval search tree. After writing that post I realized several things, and here are the new results.

As suggested by several people, the solution can be found more efficiently using dynamic programming. I’ve implemented a revised version of the code that finds the solution by recursively solving subtrees. It runs much faster after this change. The DP approach also eliminates the space leak – the program uses very little memory.
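The DP idea can be sketched like this (a hedged illustration, not the actual revised code): memoise the optimal expected cost of every subrange of [0,n-1] in a lazy array, so each subrange is solved once instead of exponentially many times.

```haskell
import Data.Array (Array, array, range, (!))

-- Hedged sketch of the DP version: optCost c n ! (x, y) is the minimal
-- expected cost of searching [x,y] with a uniform prior and test cost c.
-- Lazy array evaluation memoises every subrange exactly once.
optCost :: (Int -> Double) -> Int -> Array (Int, Int) Double
optCost c n = table
  where
    bnds  = ((0, 0), (n - 1, n - 1))
    table = array bnds [ (ix, go ix) | ix <- range bnds ]
    go (x, y)
      | x >= y    = 0                                  -- single cell: nothing to test
      | otherwise = minimum [ branch x y t | t <- [x .. y - 1] ]
    branch x y t =
      let m = fromIntegral (y - x + 1)
      in c t + (fromIntegral (t - x + 1) / m) * table ! (x, t)
             + (fromIntegral (y - t)     / m) * table ! (t + 1, y)

main :: IO ()
main = print (optCost (const 1) 16 ! (0, 15))  -- 4.0: binary search on 16 cells
```

The actual post additionally parallelises the work with parMap from Control.Parallel; the memo table above is kept sequential for simplicity.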
Also, I’ve added parallelization with a tiny bit of code using parMap, cutting the execution time by a factor of 4 (on my Core i5). It’s pretty cool how easy it is to parallelize purely functional code!

After these changes, calculating an array one cell larger takes almost exactly three times longer. Here’s my theory for why. Since we split the array at the test index and have two branches for each possible root node, an array of size n requires solving the subtrees for arrays of sizes 1 through n-1, twice [see note 1]. So, relative to the n-1 case, we have at least twice as much work (for the two extreme splits that have (n-1)-sized sub-arrays). Additionally, we have twice the sum from 2 to n-1 of $2^{-x}$ as much work to do (relative to the n-1 case) for the smaller sub-arrays, which converges very quickly to 1. In total, we need to do 3 times as much work when using the DP approach.

With the improved code I’ve calculated the solutions for slightly larger arrays. Here are the results for the range [0,17] (an array of size 18):

• Constant cost – binary section search
• Linear cost – slight bias towards lower indices
• Quadratic cost – strong bias towards lower indices

## Revised recommendation for confectionery research

OK, I’ll skip the soufflé analogy this time. Assuming a monotonically increasing cost function, a good strategy is what you’d expect: prioritize searching in the lower-costing area, using some sort of biased interval search (rather than half-interval search).

## Open question still…

I’m still looking for a closed-form solution to the cost equation. For those who didn’t read the last post, the problem is to find the search tree that minimizes the expected cost of a search on an array, where each test carries a cost (given by a cost function).
To solve the problem, one must minimize the following value by picking the optimal test point $t$ depending on the current range $[x,y]$:

$E[cost(x,y)] = c(t) + P(s \le t \mid s \in [x,y])E[cost(x,t)] + P(s > t \mid s \in [x,y])E[cost(t+1,y)]$

The simplifying assumptions were uniform distribution of the search target, and linear cost:

$E[cost(x,y)] = (c_0 t+c_1) + \frac{(t-x+1)cost(x,t) + (y-t)cost(t+1,y)}{y-x+1}$

Another possible simplification: assume that the search algorithm uses a constant ratio to pick the next search target. That is, for a range $[x,y]$ the next search target will always be $c \cdot (y-x+1)$ (rounded down), where $c$ is a constant between 0 and 1. I’ve toyed around with this one, but ended up with the same basic problem – how to solve the recursive equation?

The code is now up on github for your doubtless enjoyment. Comments welcome!

1. The algorithm is a bit smarter – there’s no need to test the last array index, so the number of tests is slightly reduced.

# Finding the perfect baking time for that soufflé (or: optimal search with test penalty)

## TL;DR

How to find a value in a range of integers where every test carries a cost – with pretty graphs at the end.

## Don’t ruin that soufflé

You know that problem. Take it out of the oven too soon, and you get a raw egg; leave it in for too long, and you’ll end up with a fat, flat omelet (or is it omelette?). A soufflé is like Schrödinger’s cat: if you open the oven to observe, it will collapse from the good-bad superposition into a slobbery glob of goo.

So what to do? The real answer is to get a good cookbook or tips from someone who knows. But let’s not bother with reality here: there’s an interesting problem hiding here that’s worth exploring. Let’s assume you can only make one soufflé at a time, and that if you open the oven to check, you can’t put it back in.
It will either be underdone, exactly ready, or overdone. Also, let’s assume that the ideal time to bake a soufflé is a constant, integer number of minutes. You need to find out quickly what the best baking time is, because you are hosting a party in a few hours and want to be prepared. Finally, let’s assume that we tried to make one, and after baking it for 30 minutes found that it was overdone.

So what’s the fastest way to find out the perfect baking time for a soufflé?

## So here’s the actual question

To state the problem more generally: we have a test, a boolean function of integers f(t), that tells us whether a desired value s is less than or equal to the tested value t. Or, in Haskell:

f :: Integer -> Bool
f t = t >= s

Every time we do the test (evaluate f(t)) we pay a penalty – in the case of a soufflé, we waste time: we have to restart the process and start baking a new one. Let’s call this penalty the cost function, and let it depend on the value we are testing. For our simple purposes, let the function be from integers to integers:

cost :: Integer -> Integer

Finally, we point out that s, the value we’re trying to find, is a random variable with a probability function pf for finding s in a range [x,y]. This function satisfies:

pf :: Integer -> Integer -> Double
-- the rule is: pf x y = sum . map pmf $ [x..y]

Where pmf (probability mass function) is a distribution on the total range [a,b] where s is known to reside.

pmf :: Integer -> Double

Being a probability mass function, the rule that pmf must hold is:

sum . map pmf $ [a..b] = 1  -- or: pf a b = 1

The problem, then, is to find an algorithm that finds s with minimal expected cost.

The average total cost of checking a range [x..y] when testing a value t in the range can be calculated recursively. Let’s say our algorithm decides which value to test next using a function guessVal(x,y). First we need to define the conditional probability cpf of finding s in a range [x',y'] given the fact that it is known to lie in an extended range [x,y]:

cpf :: Integer -> Integer -> Integer -> Integer -> Double
cpf x y x' y' = ...  -- related to pf; depends on the distribution

Then we can write:

avgCost :: Integer -> Integer -> Double
avgCost x y =
    if x == y
    then 0
    else fromIntegral (cost t) + cpf' x t * avgCost x t + cpf' (t+1) y * avgCost (t+1) y
    where t = guessVal x y
          cpf' = cpf x y

This assumes that t is actually in the range [x,y] and that s is known to be in that range. If the range is a single integer, there is nothing to check and thus zero cost. Otherwise, the cost is the probability-weighted average of the two next branches (the guessed value yielded a test result of false or true). Note that for any distribution, the two cpf values above always sum to 1, since we are covering the entire range on which the condition is applied.

We need to minimize the expected cost, so the general problem is: choose guessVal to minimize avgCost for a given range [a,b]. A minor subtlety to note is that we never need to test the highest value of a range, because it would always test positive. This can be expressed as the rule:

x <= (guessVal x y) < y

### Simplifying assumption

Let’s assume that our probability distribution is uniform.
In that case, the probability function depends only on the size of the range being tested and a constant that depends on the complete range size:

pf x y = fromIntegral (y - x + 1) / fromIntegral (b - a + 1)
-- and the conditional probability:
cpf x y x' y' = fromIntegral (y' - x' + 1) / fromIntegral (y - x + 1)

Under these conditions the avgCost function may be rewritten as:

avgCost x y =
    if x == y
    then 0
    else fromIntegral (cost t) + c * (fromIntegral (t - x + 1) * avgCost x t + fromIntegral (y - t) * avgCost (t+1) y)
    where t = guessVal x y
          c = 1.0 / fromIntegral (y - x + 1)

In many cases a uniform distribution is not a good pick – for example, when we have a pretty good guess about where s lies in the range, a normal distribution would be more appropriate.

## Cost functions

#### Constant cost

In the trivial case, the cost function is just a constant – most trivially, it is always 1. In that case our problem boils down to minimizing the number of tests, and is the same as finding a value in an array. The well-known solution to that is binary search. Our average cost is then:

avgCost x y =
    if x == y
    then 0
    else 1 + 2 * (1/2) * avgCost x t  -- both halves cost the same
    where t = x + (y - x) `div` 2

This reduces to the expected value of log(n) (rounded up), where n is the size of the complete range.

#### Linear cost

Unfortunately, when baking a soufflé every attempt has more than a constant cost – you have to wait a certain amount of time to find out whether a given baking duration is too short or too long. If time is of the essence, an appropriate cost function would be linear and increasing, such as cost(t) = t. In that case the equation becomes:

avgCost x y =
    if x == y
    then 0
    else fromIntegral t + c * (fromIntegral (t - x + 1) * avgCost x t + fromIntegral (y - t) * avgCost (t+1) y)
    where t = guessVal x y
          c = 1.0 / fromIntegral (y - x + 1)

Just a reminder: the problem is to minimize the average cost by picking the right guessVal function – picking the right t every time. If anyone solves this, let me know.

## Visualization

As suggested by this answer [see note 1], any sensible algorithm is equivalent to a binary decision tree (given a range [a,b]).
At each node, the algorithm tests a value x, spending cost(x), to determine which of the branches to take. If we assign a cost value to each node in the tree, the minimal expected cost is the weighted average of the root-to-leaf total costs (where the weights depend on the distribution pmf; for a uniform distribution the weights are identical and can be ignored). The leaves themselves have zero cost, because they signify that the result is known.

I wrote a small program that finds the best algorithm (decision tree) for a given range and cost function, and prints out the results in graphviz-compatible .dot format. Here they are for constant cost (= binary search), linear and quadratic costs, for the range [0,14]. Since I don’t (yet) have a solution for the linear cost case, I was surprised to find that the best algorithm for linear cost is still binary search (update: that’s not true – this was a side effect of testing only small trees; I’ll write a new post about the updated analysis), despite there being what is intuitively a considerable bias toward the beginning of the range. For quadratic cost (cost(t) ~= t^2), the best algorithm is quite lopsided, as you can see below. It’s possible that there’s a slight bias effect for linear cost too, one that only shows up for larger intervals.

Linear cost yields the same graph as constant cost for the range [0,14]

Quadratic cost yields a lopsided graph

## The dreaded Space Leak

The range is that small because the number of trees to test increases very fast – I could only get up to 14 before running out of memory, which is another story. The implementation is naive: the program builds all the trees before calculating the average cost of each. The right way would be to calculate the cost of each tree and discard it, so that the GC could get rid of it when memory gets tight.

## So what about that soufflé?

As visualized in the graph for linear cost above, when time is the issue you should still use binary section search
(update: the next post will explain why this is wrong). If the cost is quadratic, give higher priority to lower values.

## Open questions

1. Solve the minimization problem for the recursive equation for avgCost when the cost function is linear: cost(t) = t.
2. Take other probability distributions into consideration. For example, solve for a linearly increasing distribution (where larger values are more likely to equal s) and for a normal distribution around the middle of the range.
3. How do I improve the implementation of the brute-force algorithm so that it doesn’t explode in RAM?

## Let me know when you’re done

If you have any comments or insights, I’d be glad to hear them! My family can take a few more soufflé attempts…

1. Since I couldn’t find any information about this problem, I asked this question on cstheory.stackexchange.com. I got some backlash from one person about this not being a “research-level question” as dictated by the site’s FAQ… You can check out the other questions on the site to decide for yourself whether this question exceeds the maximum dumbness level.

# git admin: An alias for running git commands as a privileged SSH identity

So, you want to allow some person less-privileged access to your git repository. For example, you don’t want anyone pushing to master except a select few. You’ve got it all figured out with gitolite or whatnot, using SSH public keys as identities that determine privileges. Your only problem is this: every so often you need to go over to that person’s computer, sit together on some merge, and push into those higher-privileged repositories. But you can’t: you’re working on their computer, and if you push you’ll be using their SSH key, and you just don’t have the permissions. Wouldn’t it be nice if you could just type:

git admin push

…then enter your password, and be done with it? Of course it would be nice! Here’s how to do it.
It’s not limited to “push”, of course – git admin will be a prefix for any command you want to run using the alternative privileges. (And you could make many such aliases, one per SSH key you want to use – though I don’t see a good use case.)

First we pick a name for the SSH key that will reside on users’ computers and will be used for this purpose. Let’s call it “admin”. Then:

1. Create an SSH key secured with a passphrase that you (and other admins) will know, and that you will copy to the server and configure as having higher privileges:

$ ssh-keygen.exe -f ~/.ssh/admin
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
The key fingerprint is:
ed:1c:45:35:40:e7:e0:a6:58:a8:ce:35:e9:35:bd:00 me@PC

Next – we need to create a bunch of scripts to make this thing modular – let’s put them all in the same place.

2. Create a script called “ssh-as.sh” that runs stuff that uses SSH, but uses a given SSH key rather than the default:

#!/bin/bash
set -e
set -u

ssh -i "$SSH_KEYFILE" "$@"

3. Create a script called “git-as.sh” that runs git commands using the given SSH key.

#!/bin/bash
set -e
set -u

SSH_KEYFILE_NAME=$1
SCRIPTS_DIR=$(dirname "$0")
shift
SSH_KEYFILE="$SSH_KEYFILE_NAME" GIT_SSH="$SCRIPTS_DIR/ssh-as.sh" git "$@"

4. Finally, add the alias (using something appropriate for “PATH_TO_SCRIPTS_DIR” below):

# Run git commands as the SSH identity provided by the keyfile ~/.ssh/admin
git config --global alias.admin \!"PATH_TO_SCRIPTS_DIR/git-as.sh ~/.ssh/admin"

5. Use it:

git checkout master
...stuff...
# Note: "git push" (to master) would fail due to insufficient privileges
#       (in our office we use gitolite to limit pushing to some branches)

git checkout some_branch
# Note: "git push -f" would fail due to insufficient privileges
#       (we use gitolite to limit history rewrites in all but one's own branches)

git admin ...whatever other command that requires more privileges in gitolite or whatever...

etc.

We also use it to limit destructive writes (push -f). I even limit myself: if I want to push to master or do a “push -f”, I must use git admin, just to remind myself of what I’m doing. We use gitolite, and I’ve created two separate identities for myself, one of which has higher privileges and uses the key used by “git admin”.

That’s it!

Update: An alternative approach using SSH’s config file is given here, and basically boils down to adding multiple remotes to your git config using SSH host aliases. Each host alias uses a different SSH key, so to use different privileges one must push/pull to a different remote. This approach is slightly simpler to set up (no scripts involved), but it requires adding a new host alias to your SSH config for every host, and also a new remote in every git repository you set up (which may be a nuisance if you have many repositories or, less likely, many remote hosts that you work with).
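For completeness, a sketch of that alternative (host and repo names are hypothetical): in ~/.ssh/config you define two aliases for the same server, each with its own key:

```
# ~/.ssh/config – two aliases for the same git server
Host gitserver
    HostName git.example.com
    User git
    IdentityFile ~/.ssh/id_rsa

Host gitserver-admin
    HostName git.example.com
    User git
    IdentityFile ~/.ssh/admin
```

Then each repository gets two remotes, e.g. git remote add origin gitserver:myrepo.git and git remote add admin gitserver-admin:myrepo.git, and you push to whichever remote matches the privileges you need.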

# cabal install FTGL on Windows, and getting exe’s that use it to work

I wanted to try out the latest version of lamdu, a “live programming” environment (still in early development). It uses graphics-drawingcombinators, which in turn depends on FTGL to accomplish its text-drawing-in-OpenGL magic. On Linux this isn’t really an issue – a simple ‘cabal install’ does it, at least on the version of Ubuntu that I use. (EDIT: You’ll probably first need to install the FTGL dev files, e.g. sudo apt-get -y install libftgl-dev.)

Windows? No problem! With a few hacks, you’ll be rendering text in no time. This worked on my 64-bit Windows 8 installation but should work on any version since Windows XP.

1. I’m assuming you have the latest Haskell Platform for Windows installed. If not, do that first!
2. Get the 32-bit VC10 build of the FTGL binaries (FTGL.dll and FreeType.dll) and unpack it somewhere.
3. Copy FTGL.dll and FreeType.dll to:
   1. 64-bit versions of Windows: c:\windows\syswow64
   2. 32-bit versions of Windows: c:\windows\system32
4. Install the Visual C++ 2010 redistributable, 32-bit version.
5. Assuming you’ve unpacked the FTGL binaries in some directory “<blabla>\ftgl-2.1.3-vc10-32”, run the following:

cabal install ftgl --extra-include-dirs=<blabla>\ftgl-2.1.3-vc10-32\include --extra-lib-dirs=<blabla>\ftgl-2.1.3-vc10-32\lib --reinstall --force-reinstalls

6. cabal build / install whatever executable you wanted (in my case, lamdu).

That’s it! Hope I saved someone the near-hour I spent figuring this out.

# Custom error pages and WCF exceptions in IIS, and a case of bad UI

Nope, nothing to do with Haskell.

This admittedly boring post is about two specific problems with IIS 7.5 and WCF that wasted so much of my time that I thought it would be worth documenting them.

If you have a WCF service hosted as an ASP.NET application in IIS, you may encounter the following exception on the client side:

System.ServiceModel.ProtocolException: The content type text/html of the response message does not match the content type of the binding (application/x-gzip). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 75 bytes of the response were: ‘The page cannot be displayed because an internal server error has occurred.’.

—> System.Net.WebException: The remote server returned an error: (500) Internal Server Error.

at System.Net.HttpWebRequest.GetResponse()
— End of inner exception stack trace —

The system I’m maintaining has gzip-binary-encoded data over HTTP, so the first error (ProtocolException) is due to the fact that the client expects the server to talk in gzip, but receives something else instead.

The real problem here is that the server is returning an HTTP error (code 500) instead of something else. Upon some investigation, we found that the server had hit an exception. We have some custom behaviors on both the server and client side that ensure that all exceptions are passed to the client, so there really is no reason the client should be getting this generic 500 error. Normally, the client receives a re-throw of the actual exception from the server; the only case where the same exception isn’t received by the client is when the client doesn’t know how to deserialize it (for example, when it is a type not known to the client).

So why did we get an HTTP code 500 error?

It turns out that the reason had to do with custom error pages on the IIS server that was hosting this WCF service. We had recently defined custom error handlers on our IIS server for HTTP 403. The official title for 403 is “Forbidden”, but we were actually interested in customizing a more specific variant – 403.6, which is used by IIS when an IP address is denied. (By the way, IIS 6.0’s management UI allows one to define custom error pages for specific sub-codes (such as 403.6), but in IIS 7.5 you can only set a page for the general code – in our case 403 – and not for a specific sub-code such as 403.6. I’m not sure whether setting a page specifically for 403.6 would have avoided the problem we encountered.)

What happened was that every time an exception occurred in any of our ASP.NET applications, IIS decided to show the (arguably) pretty HTML pages instead of passing on whatever exception was being thrown. The client was expecting binary, gzipped data containing either the result of the call or an exception (service fault), but instead received un-parsable HTML.

The first thing that struck us as odd was that WCF data was being replaced with a custom error page. That may be by design, but it’s confusing. But what really puzzled us was that we had already removed the custom error page setting from IIS. We did this using the IIS management UI – by going to the “Error Codes” feature of the website in question (not at the server level) and changing the 403 page from the file it was pointing to back to the value it had before:

%SystemDrive%\inetpub\custerr\<LANGUAGE-TAG>\403.htm

However, when you do it this way it does not go back to the original behavior.

Here’s what you have in the web.config of that site after setting the custom error page (in this example, we set it to “/test.html”):

<httpErrors>
  <remove statusCode="403" subStatusCode="-1" />
  <error statusCode="403" prefixLanguageFilePath="" path="/test.html" responseMode="ExecuteURL" />
</httpErrors>

Before we ever touched the custom error settings for our site, this entire section (<httpErrors>) did not exist in our web.config. We then tried to set it back to the default behavior, by clicking on “Insert content from static file” with the “Try to return the error file in the client language” option checked, and setting the root path to “%SystemDrive%\inetpub\custerr\” and the file to “403.htm”. That is exactly what the UI was showing before we ever touched this screen. You’d expect the web.config to go back to its previous configuration, but instead what we find in web.config is:

<httpErrors>
  <remove statusCode="403" subStatusCode="-1" />
  <error statusCode="403" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr\" path="\403.htm" responseMode="File" />
</httpErrors>

If we look closely at the UI we see that the corresponding error code row has the value “Local” under the “Entry Type” whereas the other rows have “Inherited”:

The bottom line is that our site is now overriding the default behavior and forcing a custom error page for 403. In IIS 6 there was an option to revert to the parent’s behavior, but there isn’t in IIS 7.5 – the UI provides no way to change the setting back to the inherited value. The only solution is to go into the web.config and revert the <httpErrors> section to what it was originally – in our case, remove the section completely.
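If you prefer the command line to hand-editing web.config, appcmd should be able to clear the section at the site level – a sketch, assuming the default site name:

```
%windir%\system32\inetsrv\appcmd clear config "Default Web Site" /section:httpErrors
```

This deletes the site-level <httpErrors> overrides, so the inherited (server-level) behavior applies again.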

Once we removed the <httpErrors> section completely, IIS reverted back to the old behavior of not using custom error pages, and then our WCF service (hosted in ASP.NET) started sending out exceptions if they occurred, instead of showing HTML error pages. Saga ends.