In Silverlight 3, Microsoft added a requirement that dialogs be invoked only in response to a user-initiated action. Any attempt to call OpenFileDialog.ShowDialog(), for example, in a Load event triggers this error:
“Dialogs must be user-initiated”
Simple enough to deal with, since most likely your code only needs to call dialogs in response to click events, but I noticed there is a problem with ChildWindows and dialogs.
Let’s say I have a Page that contains a ChildWindow called fileWindow, and in the fileWindow is an OpenFileDialog and a browse button. Clicking the browse button calls ShowDialog and does what you’d expect. The first time this is done, it all works fine. The problem happens when the fileWindow is closed and then opened again from the Page. On the second ShowDialog, I get the above error after I select a file or cancel. It’s a bit odd, since you’d expect that error to pop up before the ShowDialog executes to prevent it from happening, but in this case it occurs after. I have no idea what is going on underneath, but it is a bit annoying to work around. I don’t know yet whether it’s a bug in Silverlight or in my code.
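For concreteness, here’s a minimal sketch of the repro; the handler name and wiring are my assumptions, not the project’s actual code:

private void browseButton_Click(object sender, RoutedEventArgs e)
{
    var dialog = new OpenFileDialog();
    // The first time fileWindow is shown, this behaves normally. After
    // fileWindow is closed and reopened, the "Dialogs must be user-initiated"
    // SecurityException surfaces here once a file is selected or cancelled.
    if (dialog.ShowDialog() == true)
    {
        // read the selection from dialog.File
    }
}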
With the advent of Linq-to-SQL some time ago, I was quite happy to have some sort of standard ORM for .NET, albeit a very basic one. After using it for some time, and seeing a lot of other people use it, I noticed one big performance gotcha that can get you if you don’t take the time to understand what is going on under the hood, and that is traversing related tables after a query. Assume tables Foo1 and Foo2, both sharing a key FooID. LINQ makes it really easy to fetch related data, either from the Foo1 entity, like Foo1.Foo2, or from the opposite direction, Foo2.Foo1. The problem is in a query like this:
using (var db = new FooDataContext()) // FooDataContext is a stand-in name
{
    var results = new List<object>();
    var a = db.Foo1.Where(p => p.Field == something);
    foreach (var temp in a)
    {
        // Touching temp.Foo2 here silently issues a separate query per row
        var result = new { ID = temp.FooID, Field1 = temp.Foo2.Field1 };
        results.Add(result);
    }
    return results;
}
People new to LINQ, or who aren’t thinking everything through, might assume that will produce just one SQL statement, but in fact it will be N + 1 statements, where N is the number of rows returned from Foo1. If a reference is made to a related entity, and that entity has not been filled in, then LINQ will automatically make a SQL call for it. Basically it’s a lazy load, which I’m sure users of Hibernate and the like will understand. Often this problem goes unnoticed until it ends up in a QA environment, since you’d never notice the performance hit while the SQL server and the .NET application are running on the same box.
To get around it, you could set ObjectTrackingEnabled to false, fetch the related data in a fixed number of queries, and then fill in the related values yourself. Something like:
using (var db = new FooDataContext()) // same stand-in name as above
{
    db.ObjectTrackingEnabled = false;
    var results = new List<object>();
    var a = db.Foo1.Where(p => p.Field == something).ToList();
    var ids = a.Select(q => q.FooID).ToList();
    // Contains translates to an IN clause, so this is one query for all rows
    var related = db.Foo2.Where(p => ids.Contains(p.FooID)).ToDictionary(p => p.FooID); // Probably won't work like this exactly
    foreach (var temp in a)
    {
        var result = new { ID = temp.FooID, Field1 = related[temp.FooID].Field1 };
        results.Add(result);
    }
    return results;
}
Not ideal for sure, but probably better than making way too many calls to the DB server. There are other, better ways of handling this case, but I just want to get people to think about what LINQ-to-SQL is doing underneath.
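One of those better ways, for reference: LINQ-to-SQL can eagerly load the related rows via DataLoadOptions, folding the related data into the initial query. A minimal sketch, using the same hypothetical FooDataContext as above:

using (var db = new FooDataContext())
{
    var options = new DataLoadOptions();
    options.LoadWith<Foo1>(f => f.Foo2); // pull Foo2 in with the same query
    db.LoadOptions = options;            // must be set before any query runs

    var results = new List<object>();
    foreach (var temp in db.Foo1.Where(p => p.Field == something))
    {
        // temp.Foo2 was loaded up front, so this no longer fires a query per row
        results.Add(new { ID = temp.FooID, Field1 = temp.Foo2.Field1 });
    }
    return results;
}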
Ran into this bug recently. You can find a report on it here. Basically, if you create a custom endpoint behavior extension, say to support FaultExceptions in Silverlight 3, the type attribute of the extension must EXACTLY match the assembly-qualified type name. So for instance, if you had something like this:
<extensions>
  <behaviorExtensions>
    <add name="silverlightFaults"
         type="Sample.SilverlightFaultBehavior,
               Sample,
               Version=1.0.0.0,
               Culture=neutral,
               PublicKeyToken=null" />
  </behaviorExtensions>
</extensions>
You might think that the above would work in any normal case, but in fact it will fail, because WCF (or some related loader) matches the extension by string rather than by type. Since the type string above is formatted with line breaks and extra whitespace, it won’t match. Every character counts, which is why it must exactly match the assembly-qualified name. You can get the assembly-qualified name from the AssemblyQualifiedName property on the type of the class. Here’s what it would have to look like to work in the above example (truncated because it’s too long for the format here):
<extensions>
  <behaviorExtensions>
    <add name="silverlightFaults"
         type="Sample..., Sample..., Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  </behaviorExtensions>
</extensions>
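A quick way to generate that exact string, assuming the Sample.SilverlightFaultBehavior class from the config above:

// Prints the assembly-qualified name on a single line, ready to paste into
// the type attribute, e.g. "Sample.SilverlightFaultBehavior, Sample,
// Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
Console.WriteLine(typeof(Sample.SilverlightFaultBehavior).AssemblyQualifiedName);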
Good news is that this will be fixed in .NET 4.0.
I’ve mentioned Sharepoint being used for odd purposes before, such as a pure data store instead of a full CMS plus delivery. Now I’ve seen another case, where Silverlight was used in place of a standard ASP.Net website. Certainly, there are sites that are 100% Flash and are very effective, but I don’t think any of those sites resemble standard web pages. The best tool for building a website is HTML or equivalent markup, so why do it in Flash or Silverlight? Taking something like Silverlight and constraining it to the principles of standard web design ends up being the worst of both worlds. The UI behaves differently than the user expects, but looks like a webpage, so the user is thrown off. You get to deal with everything being an asynchronous call, which can be powerful, but not in this case. All that while reaping none of the benefits. Then there are potential cross-site issues, restrictions on what libraries Silverlight can reference, and plenty of other problems. All this leads to something that does not behave as well as a regular webpage and is not as easy to maintain.
There are things Silverlight and Flash are good at. Shoehorning them into other problem domains (which seems all too common in consulting) just feels like a huge waste of time. The problem of building a webpage is pretty well solved at this point; why not look into creating sites with innovative layouts and interaction, so all that time fiddling with XAML doesn’t go to waste?
I think people tend to get too attached to the tools and lose focus on solving the problem.
Something I came across while doing some consulting work. It was a simple problem: calculating how many weeks an employee worked (a week only needs one day of work to count). The catch was that a work week starts on Thursday, and there is no guarantee an employee works every week, so there can be gaps. With that restriction, most developers like myself might be inclined to write a custom program to iterate through the days worked and tabulate, perhaps incrementing the count on each Thursday. There are a few edge cases that can really be annoying to handle with a linear loop: if an employee works on a Wednesday, takes a break, and resumes 13 days later on a Tuesday, how would you count that last Wednesday as a work week? There are a couple of others, but suffice to say that what would normally be thought of as a really quick and simple program becomes a little more complicated than it should be.
A nice little trick is to pregenerate a table with every day in the given time period, let’s say four years or so. Assign each day a week number, incrementing the number on each Thursday, since that’s when a new work week starts. Then there will be a table like this:
Date,Week
1/1/1,1
1/2/1,1
1/3/1,2
1/4/1,2
and so forth. Then you can join this table against all the days worked by an employee and easily see which week each day worked belongs to. Group that up and it’s easy to count the number of weeks. And since it’s simple, you can be relatively sure it’s accurate. So anytime you find yourself iterating through some data and doing some weird calculation to get a count, consider whether you can approach it with pregenerated data. One other, more famous example I know of is cracking the NTLM password hashes of old: someone pregenerated the hashes for all passwords containing only numbers and letters up to a maximum length (the so-called rainbow tables), making it really quick to recover a password given its hash. And there you have the reason for password complexity rules.
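A minimal sketch of the pregeneration in C#; the start date, the four-year window, and the daysWorked collection are assumptions for illustration:

// Pregenerate a date -> week number lookup, starting a new week each Thursday.
var start = new DateTime(2001, 1, 1);
var weekOf = new Dictionary<DateTime, int>();
int week = 1;
for (var day = start; day < start.AddYears(4); day = day.AddDays(1))
{
    if (day != start && day.DayOfWeek == DayOfWeek.Thursday)
        week++; // a new work week begins every Thursday
    weekOf[day] = week;
}

// daysWorked: the employee's worked dates (hypothetical IEnumerable<DateTime>).
// Join against the lookup; the number of distinct weeks is the answer.
int weeksWorked = daysWorked.Select(d => weekOf[d.Date]).Distinct().Count();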
Recently ran into a problem with cookie forwarding and a load balancer that flared up when the Google Search Appliance was tied in. This wasn’t a very expensive implementation, lacking a session state tool like ScaleOut Server, so the load balancer was set to sticky sessions. That normally works fine, except when the GSA was thrown into the mix. The problem was simple, but somehow we managed to miss it. The client had set up the GSA to index protected content, pointed the sample login at a protected URL on the server, and turned on cookie forwarding for authentication. For those not familiar, the GSA handles searches on protected content by forwarding the browser cookies that match those needed by the sample login URL whenever a user requests a search with protected results. This is done for each protected search result, and if any return a 302 (or whatever) then the GSA will redirect the user to the website’s login page.
Now, all the cookie domains and such were set up correctly, and the .AUTH cookies were going through. Unfortunately, we forgot that the GSA won’t pass along the load balancer cookies (or perhaps this particular load balancer didn’t use cookies), resulting in the forwarded requests landing on a different server than the one the user was on, so auth would still fail in the end anyway. What really made it tough to spot was that it was a 50/50 chance it would fail. We haven’t solved the issue for real yet; we’re currently running the load balancer in failover mode, but I imagine we’ll have to use some session storage, either the out-of-the-box one or SQL.
With the recent popularity of the Agile methodology, I’ve seen a lot of people say they do Agile when in fact they do not. Here are some common mistakes I’ve seen.
- Just because you have iterations doesn’t mean you’re doing Agile.
I think a lot of PMs love to have iterations so they always have a reason to crack the whip periodically, but they fail to realize what the actual purpose of an iteration is. So after an iteration is done, these PMs don’t allow time for refactoring or a retrospective to see how to improve their process and reconcile requirements with the stakeholders. Pretty much it’s “Ok, Iteration X is over, it was hard, now start your Iteration X + 1 tasks.” Even worse is when they plan design, implementation, and testing inside every iteration, so it becomes iterative waterfall or something.
- Each iteration should not be treated like the end of a project.
Kind of an addition to the first point, but basically the issue here is that iterations tend to get delayed, or massive overtime is used to hit an iteration deadline early or in the middle of a project. One purpose of iterations is to help gauge the speed of development, so if tasks aren’t completed as quickly as expected, move the tasks out and make sure an iteration only contains the things that will get done in time to have something ready to show. Dumping overtime into the middle of the project is a surefire way to make the subsequent iterations suffer.
- Daily standups are meant for the developers, not the PMs/leads
Seems like a lot of PMs and leads like the idea of a daily standup because it allows more micromanagement. The actual purpose of the daily standup is to inform the other members of the team about what you’re doing and what is causing you problems, so other devs can offer help where they think they can contribute the most when they have free time, or immediately in the case of a blocking problem. When the PM drives these, they tend to slide toward the “waste of time” end of the scale.
I’m sure I could come up with many more, but then this post would start looking like an Agile guide, and I can’t say I’m qualified to make one of those anyway.
Now I’m not trying to argue that Sharepoint has no purpose, but when I see highly (and I mean highly) customized implementations of Sharepoint, I sometimes wonder if we’ve gone a bit too far trying to make the square peg fit the round hole.
Looking at Sharepoint as a complete product, out of the box it provides several very nice features, such as a document repository, news portal, etc. Many clients use it to host internet-facing web sites and portals, and it seems to work reasonably well as long as Sharepoint is used as both the content management and the delivery mechanism. Sometimes, however, I see only a portion of it used, like the content management side storing just some lists and forms, while a completely custom website is built to deliver the content. That’s where it gets hokey, because Sharepoint doesn’t provide a very good API for accessing that content in the way a website would need.
So the solution? Have events fire in Sharepoint that copy the data from a list into another database so it can be served! Now you have this monster Sharepoint CMS, and all it’s doing is storing some arbitrary lists and managing rights. As a result, the company has to maintain a set of custom features just to synchronize the data with another DB. Then you need consultants to manage those custom features, and tasks that used to be simple, like deployment and backup/restore, now require a huge list of steps to execute; if anything goes wrong, you’ll end up researching the problem on Google, because no one outside the Sharepoint product team could debug its error messages.
The real solution then? I’m not sure, but for any large CMS package like Sharepoint, I’d recommend not using it unless you use it as its designers intended: to manage and deliver content all in one system. You want a system to manage content but not deliver it? There are simpler products for that; don’t just throw Sharepoint at it because it came free with your MS licenses.