ClosureExterns.NET makes it easier to keep your frontend models in sync with your backend. The output is customizable – you can change several aspects of the generated code. For example, you can change the constructor function definition to support inheritance from another Javascript function. For more details, see `ClosureExternsOptions`.

First, install it. Using **nuget**, install the `ClosureExterns` package.

Then, expose a method that generates your externs. For example, a console application:

```
public static class Program
{
    public static void Main()
    {
        var types = ClosureExternsGenerator.GetTypesInNamespace(typeof(MyNamespace.MyType));
        var output = ClosureExternsGenerator.Generate(types);
        Console.Write(output);
    }
}
```

You can also customize the generation using a `ClosureExternsOptions` object. For example, this input class:

```
class B
{
    public int[] IntArray { get; set; }
}
```

generates the following externs:

```
var Types = {};
// ClosureExterns.Tests.ClosureExternsGeneratorTest+B
/** @constructor
*/
Types.B = function() {};
/** @type {Array.<number>} */
Types.B.prototype.intArray = null;
```

For a full example see the tests.

Tagged: .NET, c#, Javascript

I spend too much time in traffic.

So much time, that I start to see things. I see patterns.

You can get stuck in a slow lane because the others are going so fast you never manage to merge. So on average, how much time do you spend stuck in inescapable lanes? Does it make a difference in the cosmic scheme of things? (No, it doesn’t.)

I decided to build a simulator and see how driving styles affect traffic.

Source is on github.

So, I did a quick refactor this week to split it into a library + executable, and pushed it to github (to deafening cries of joy).

First, a non-feature: xml-to-json is “optimized” for many small XML files. If you have many small files, you can easily take advantage of multiple cores/CPUs. Be aware that for large files (over 10MB of XML data in a single file) something starts to eat up RAM – around 50 times the size of the file.

Other features:

- You can filter xml subtrees to convert, by element name regex (and you can skip the matching tree root if you wish, converting only the child elements and down).
- Output either a top-level json object or json array.
- (Optionally) simplify the representation of XML text nodes in attribute-less elements (e.g. `<elem>test</elem>` -> `{ elem: "test" }`)
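
That last simplification can be illustrated with a short sketch (Python for brevity – xml-to-json itself is Haskell; the function name and recursion here are my own illustration, not the tool’s code):

```python
import json
import xml.etree.ElementTree as ET

def simplify(elem):
    # An attribute-less element containing only text collapses to a
    # plain string; everything else becomes an object of its children.
    children = list(elem)
    if not children and not elem.attrib:
        return elem.text or ""
    node = dict(elem.attrib)
    for child in children:
        node[child.tag] = simplify(child)
    return node

doc = ET.fromstring("<root><elem>test</elem></root>")
print(json.dumps({doc.tag: simplify(doc)}))  # {"root": {"elem": "test"}}
```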

For XML decoding I’m using hxt (over expat, using hxt-expat). I tried a few of the XML packages on hackage, and hxt + expat was the only combination that parsed quickly while avoiding nasty memory issues. Apparently tagsoup can be used with ByteStrings to avoid the same issue, but I didn’t try it.

JSON is encoded using aeson.


Rarely, but not rarely enough, I find myself banging my head against some unexpected semantics of the C# language. Sometimes I’m simply misunderstanding the language, other times it’s a limitation, or even a compiler bug.

To document my findings I’ve started a repository on github with a list of C# pitfalls, compiler bugs, and various other .NET framework gotchas. These are not “beginner tips” – they are things that may surprise even a seasoned developer (or is it just me?).

In any case, all are welcome to join this effort – don’t hesitate to fork & send your pull requests.

Tagged: c#

My plea to designers: make that clickable thingy **look clickable**. Make that editable input box **look like an input box**, and not just when I hover over it.

As put by Raluca Budiu from Nielsen Group in this article about iOS 7:

> Buttons and interface widgets, when present, need to be easily distinguishable from content. They need to have good affordances that invite users to action. In the absence of strong signifiers, they can get ignored, and users may find themselves lost and disoriented.

Not all “flat” design makes chrome indistinguishable from content, as pointed out in the above article.

BTW, I did not put the ads. WordPress.com told me: “Occasionally, some of your visitors may see an advertisement here.”

Tagged: design

Besides being great, it also has a fast release cycle (2.5 weeks) and is open source. It is being developed by folks at Adobe (plus others from the OSS community).

**Favorite features:**

- Nice hinting and inline code helpers for CSS.
- Live editing of HTML and CSS – you can fire-up a browser (currently only Chrome) that is updated as soon as you change the code, no need for F5. It also highlights the HTML your cursor is at.
- Very lightweight. Yet, it has everything you need to start working on a web project. You just install it.
- It comes with JSLint built-in that immediately bugs you about your Javascript pitfalls. Other IDEs I’ve used leave it to you to set up that kind of stuff.
- Extensible. The extensions I’ve installed include CSS color hinting, and a context-menu to open a file’s folder in the OS’s file system thingy (explorer, in Windows).

**Weaknesses**: a few basic editor features (such as code folding or more intelligent search) are lacking, but the basics are good enough to be very productive. Extension management is currently done via manual file copying, but according to the dev blogs, they’re working on an extension manager.

**Platform support:** currently built only for Windows and Mac OSX.

**Tip:** if you’re using Live Development on a site that uses AJAX to access its own web server, you need to either enable cross-site requests on your development web server, or (easier) tell Brackets where your development web server is serving your html/js/css from – it’s under File…Project Settings. See here for more.
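
For the first option, a throwaway development server only needs to send one extra header. A minimal sketch using Python’s built-in server (nothing Brackets-specific, and the class name is mine):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    # Allow cross-origin requests during local development only --
    # don't ship a wildcard like this to production.
    def end_headers(self):
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

server = HTTPServer(("localhost", 0), CORSRequestHandler)  # port 0: pick any free port
print("serving on port", server.server_address[1])
# server.serve_forever()  # uncomment to actually serve files
```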


I tried. I installed Windows 8 on my personal laptop. It was so cheap – $15 to upgrade – that I decided to give it a shot.

What a mistake.

If you do anything at all with your computer, don’t use Windows 8. For a tablet – maybe, I don’t know. For a computer – no way. Just say no.

I’m not a hater or a flamer; I use almost exclusively Microsoft products all day long (Windows 7 and Visual Studio for most of my work day). I like Windows 7.

Here’s a very short list of stuff that kills me **(there are many other problems)**:

1. Windows 8 restarts your machine WHILE YOU ARE USING IT to finish installing updates. It warns you, but never gives you a chance to prevent it. You could be in the middle of reviewing an emergency patient’s diagnosis, and *BOOM* – the machine shuts down and is unusable for at least 5 minutes. You have no option to save your work. You have no option to stop this from happening.

2. The “metro UI” or “new start screen” or “tile area” is horribly confusing because it hides everything else. You can’t see what’s running at a glance. You can’t change this behavior.

3. The “charms” bar requires you to point your mouse to a magical location. You can’t change this. You can’t disable this (and make the freakin’ bar always visible)

4. There is no bloody CLOCK or calendar on the default screen, that annoying “start screen”. You can’t add one without installing a third-party “windows store app”.

5. The control panel is split between the “classic” control panel and some super-simplified – yet non-overlapping – set of options in the “charms” settings area. Go figure where to fix something.

Too bad, Microsoft really screwed this one up.

End of rant. Here’s a more scientific approach that analyzes Windows 8’s usability issues.

p.s. some search engine food: windows 8 sucks, windows 8 is bad, windows 8 is horrible, terrible, nasty, don’t use windows 8, don’t install windows 8, keep windows 7, windows 7 is better than windows 8. Google, you get the point.

Tagged: Windows 8

This problem has been bothering me for a while…and I finally decided to search for a solution.

Tagged: Firefox

This is the last post in a series dedicated to exploring the problem of “penalty search”. Penalty search involves an ordered array: the first part of the array has values that do not satisfy some property, while the second part – the rest of the values – do satisfy the property. Testing a value for the property carries an index-dependent penalty, given by a cost function. We want to find the first value (lowest index) in the array that satisfies the property. The “total cost” of a search attempt is the total penalty spent looking for the value until it is found.

The goal is to devise an algorithm that searches the array with minimal expected total cost.

Besides being fun to wrestle with, this problem is interesting because it deals with a realistic physical situation: you need to find out the ideal timing for some process. Any time-dependent experiment can be modeled by this problem. Every time you try, you *spend time*, so the cost of the test is time being spent. The goal is to find the solution as quickly as possible, so we want to minimize the expected time spent searching for a solution.

The last post explored how a linear cost function affects the recursive difference equation that calculates the expected cost. Linear cost is interesting because first, it is easier to analyze, and second, because it fits problems where the penalty is the amount of time spent waiting to check a time-sensitive process.

Analytically, the nice property of linear cost is that the size of the array being searched is what determines the cost (up to a constant), not the specific range of indexes. In that last post we took advantage of this translational quasi-invariance of the linear cost function to simplify the problem. After defining a couple of helper functions, we ended up with the following equation:

Where:

- $N$ is the size of the region (array) that’s being searched
- $h(N)$ represents the expected cost of a region (almost – you need to divide by $N$ to get the actual cost)
- $t$ is the index that our algorithm decides to test in the current step
- and $c(t)$ is the cost of testing at index $t$

For linear cost, we expect the solution – the optimal search algorithm – to be somewhat similar to binary section search: we always pick a constant ratio (“section”) of the current range as our next search target. Although I can’t prove that this is a valid assumption, I think it’s interesting enough to see what the ratio should be if we **do** assume constant-ratio search.
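
Assuming constant-ratio search, we can at least probe the question numerically. A small sketch, under my own conventions (half-open regions, uniform prior, and a test at index `t` revealing whether the target is at or below `t` – these may differ slightly from the posts’ exact equations):

```python
def ratio_search_cost(n, c, r):
    # Expected total penalty on an n-cell region when every step tests
    # at a fixed fraction r of the current region.
    def cost(x, y):
        size = y - x
        if size <= 1:
            return 0.0                     # a single cell needs no test
        t = min(x + int(r * size), y - 2)  # clamp to a legal test index
        p_left = (t - x + 1) / size        # P(target is at or below t)
        return c(t) + p_left * cost(x, t + 1) + (1 - p_left) * cost(t + 1, y)
    return cost(0, n)

# With an increasing linear cost, the cheapest fixed ratio should fall
# below 1/2 (the binary-section point).
ratios = [i / 100 for i in range(1, 100)]
best_linear = min(ratios, key=lambda r: ratio_search_cost(64, lambda t: t + 1.0, r))
print(best_linear)
```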

To begin, we define where (remember we are already assuming that the region is ). To make things simpler (though admittedly less precise) we’ll drop the floor notation and treat as if it were an integer. Substituting in the last equation, we get:

With a little simplification (saving time using Maxima) we get:

Notice that the two recursive references to the function nicely represent testing the left and right sub-regions around the split point . I’m personally not any closer to solving that equation, so here’s where we use approximations.

In the last post we saw that for the value must be zero. We can use this information to solve :

Following the same logic as in that last post, we know that we can only split a region of size 2 into two subregions of size 1. This means the recursion must reference the size-1 case twice, which limits the range of the split ratio:

and

Implying:

Here we have a first result: we must split somewhere lower than the middle. *Less than* makes sense if our linear curve is increasing, such that . I didn’t have time to go back and check, but I’m guessing I implicitly made that assumption somewhere along the way.

It also makes sense: if the cost is increasing linearly, the upper bound for a constant split ratio is the middle – which is exactly binary section when cost is constant (not increasing at all, ).

It’s an interesting exercise to plug values for and see where the bounds on go. I’m leaving that for another day, or for you to try.

That’s all. If you find interesting instances of this problem, let me know!


The previous post – the second in a series discussing optimal search under test penalty – concluded with an open question. We can write an equation describing the expected cost, which we want to minimize by picking the best test index for each interval:

The open question is: *how?*

is the test cost (penalty) function. Different values have different costs, so we expect the optimal search to take the different costs into account and be smart about picking which values to test. As we saw previously, for constant cost the algorithm is simply binary section search. We have a recursive function of two variables (the range – $x$ and $y$) which we want to minimize by picking yet another function (the one that picks the test point given the current range).

I don’t know how to solve this generally, which is why I resorted to manual calculations and pretty graphs. In that same previous post I used dynamic programming – a natural problem for Haskell – to find the optimal decision tree for a specific range and cost function. Thanks to Control.Parallel it was easy to improve performance by taking advantage of today’s multi-core processors. What I would really like, though, is to actually find a closed-form solution, one that defines the optimal search algorithm.
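
To make the dynamic-programming idea concrete, here is a compact sketch in Python rather than the original Haskell (conventions are mine: half-open regions, uniform prior, and a test at `t` revealing whether the target is at or below `t` – not the author’s actual code):

```python
from functools import lru_cache

def optimal_expected_cost(n, c):
    """Minimal expected total penalty for finding the target in an
    n-cell region, where testing index t costs c(t)."""
    @lru_cache(maxsize=None)
    def g(x, y):
        # Expected cost over [x, y) multiplied by the region size --
        # the same trick as defining g = cost * (region size).
        size = y - x
        if size <= 1:
            return 0.0
        # Testing t splits [x, y) into [x, t+1) and [t+1, y).
        return min(size * c(t) + g(x, t + 1) + g(t + 1, y)
                   for t in range(x, y - 1))
    return g(0, n) / n

# With constant cost this recovers binary-section behaviour.
print(optimal_expected_cost(2, lambda t: 1.0))  # one test -> 1.0
```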

I’m no mathematician (nor a computer science wizard). If you think there’s a mistake anywhere, let me know.

So what to do when you don’t know how to solve a problem?

Make more assumptions.™

Hopefully, we’ll learn something from the solution of the simpler problem.

The first assumption we can make is that of uniform probability (of the search target, in the range). Here’s our equation:

The range is what our algorithm decides to search in the next step, and is the currently tested value. We expand the expectation over the next test (the next level in the decision tree). For clarity I’ll drop the indexes, since we don’t need them now:

Where is a conditional probability. From now on I’ll skip writing the expectation . Using our assumption of uniform probability we can write:

Notice that now our problem is *mostly* a function of range size, so we can define and write:

Multiply by $y-x+1$ to get:

Being a non-mathematician, here’s where I said “Aha!”. We can make things simpler if we define:

Since $g$ is just $cost$ multiplied by $y-x+1$, minimizing $g$ will also minimize $cost$. Using this definition the equation becomes:

…which is beginning to look a lot more manageable. And remember – so far, the only assumption we’ve made is that of uniform probability. But how to proceed? We still have a recursive equation with two variables, x and y.

Someone must have once said: “one is better than two.” Variables. In a recursive function.

Let’s add another assumption – that the cost function is linear:

Where $a$ and $b$ are arbitrary constants. Why linear? This cost function has the nice property that the cost of any search tree is **invariant under translation** (of the searched range), up to a constant addition. That is,

or

How do I know this? Here’s an informal proof. A search tree consists of nodes representing the values being tested (and cost being spent). If we translate the underlying range, we’re in effect changing the linear cost function by adding a constant that depends on the amount of translation. So the cost of each node in the search tree will vary by the same constant, leaving the distribution of cost throughout the tree equivalent and keeping the expected cost minimal compared to other trees on the same range.

This property means that given a range size we can find the best search tree for just one translation of that range (say, ) and it will be correct for *any* other range of the same size.
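
Written out (using $c(t)=at+b$, and my own symbol $d$ for the amount of translation), the informal proof above is one line: shifting every tested index by $d$ adds the same constant to every node, so the expected cost of any tree $T$ changes by exactly $ad$, leaving the comparison between trees unchanged:

```latex
c(t + d) = a(t + d) + b = c(t) + ad
\quad\Longrightarrow\quad
\mathbb{E}\left[\mathrm{cost}_{T + d}\right] = \mathbb{E}\left[\mathrm{cost}_{T}\right] + ad
```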

Plugging in the linear cost in the equation from before gives us:

I’m explicitly writing that the choice of depends on , just to make things clearer. We’ll use the indexed notation to signify that depends on the th range, and use it also for the range size, that is .

Now we can use the translation invariance to replace everywhere with 0. At the same time we can get rid of since it will be equivalent to make the function depend on . To that end, define:

Then, the equation above for becomes:

That last term is due to the required translation of the from the origin to (the original value was .

Let’s look at the actual value of the expected cost for a few small N’s (region sizes):

When the size of the region is 1, there is nothing to check – by assumption the target value is in the region, and there is only one cell, which must be it. Since there is nothing to check, there is no cost, so $cost(0,1)=0=h(1)$. Also, we don’t care what the offset (translation) of this one-celled region is – so we don’t need to add the additive term when calculating translated calls such as $cost(x,x+1)$, because the value is always 0.

Let’s move on to $N=2$. For a region of two cells, the only logical algorithm is to check the first cell. If it yields the value 0, then the target (the first post-step cell) is cell #2; otherwise (the value at the first cell is 1) the target is cell #1. So the expected cost is just the cost of the first cell – which has index 0 – which is $0a+b=b=cost(0,2)$.

That’s what simple logic dictates. Now let’s see how our equation fares:

Remember that , so in this case t can only have the value 0 (matches our logic that we can only test cell 0). Since the second recursion () turns out to be a 1-celled region, we delete the additive term and replace it with 0:

The result matches our “manual logic” from above.

Here’s where things start to get interesting: $N=3$ is the first case where we have to compare different values of $t$ to find out which one yields the minimal expected cost. The possible values are 0 and 1:

So which is less? We should pick $t=0$ if the first expression is smaller; otherwise, pick $t=1$.

This is interesting because it means that the optimal search algorithm depends on how our linear cost is defined. If the slope $a$ is “twice more dominant” than the constant $b$, then we should bias our search towards 0. Otherwise, we go with binary section and take the middle cell, with index 1.
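
We can sanity-check the N = 3 case numerically. A small sketch (again with my own conventions – half-open regions, uniform prior – so the exact break-even condition may come out differently from the equations above):

```python
def expected_cost(x, y, c, pick):
    # Expected penalty of a fixed strategy on region [x, y) under a
    # uniform prior; pick(x, y) chooses which index to test next.
    size = y - x
    if size <= 1:
        return 0.0
    t = pick(x, y)
    p_left = (t - x + 1) / size
    return c(t) + p_left * expected_cost(x, t + 1, c, pick) \
                + (1 - p_left) * expected_cost(t + 1, y, c, pick)

a, b = 5.0, 1.0                 # a slope that dominates the constant
c = lambda t: a * t + b
low_first = lambda x, y: x                              # bias towards index 0
middle = lambda x, y: min(x + (y - x) // 2, y - 2)      # "binary section"
low = expected_cost(0, 3, c, low_first)
mid = expected_cost(0, 3, c, middle)
print(low, mid)  # with a dominant slope, the low-biased strategy wins
```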

We could spend the rest of our lives calculating larger and larger values of N. What we really want, though, is a general, closed-form solution for any N. In my next post I’ll make another weakening assumption that will make it possible.
