Setting up a new build process using Git in TFS is easy, but you can face some initial problems, especially if you have been using TFVC.
Problems like this:
Continuous Integration Build of branch-master (MyProduct)
Ran for 0 minutes (Default Controller – name), completed at Thu 04/09/2015 06:20 PM
TF215097: An error occurred while initializing a build for build definition \MyProduct\mybranch-master: Exception Message: One or more errors occurred. (type AggregateException) Exception Stack Trace: at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification) at Microsoft.TeamFoundation.Build.Client.FileContainerHelper.GetFile(TfsTeamProjectCollection projectCollection, String itemPath, Stream outputStream) at Microsoft.TeamFoundation.Build.Client.FileContainerHelper.GetFileAsString(TfsTeamProjectCollection projectCollection, String itemPath) at Microsoft.TeamFoundation.Build.Client.ProcessTemplate.Download(String sourceGetVersion) at Microsoft.TeamFoundation.Build.Hosting.BuildControllerWorkflowManager.PrepareRequestForBuild(WorkflowManagerActivity activity, IBuildDetail build, WorkflowRequest request, IDictionary`2 dataContext) at Microsoft.TeamFoundation.Build.Hosting.BuildWorkflowManager.TryStartWorkflow(WorkflowRequest request, WorkflowManagerActivity activity, BuildWorkflowInstance& workflowInstance, Exception& error, Boolean& syncLockTaken) Inner Exception Details: Exception Message: VS30063: You are not authorized to access https://tfs.instance-domain. 
(type VssUnauthorizedException) Exception Stack Trace: at Microsoft.VisualStudio.Services.Common.VssHttpMessageHandler.<SendAsync>d__0.MoveNext() — End of stack trace from previous location where exception was thrown — at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult() at Microsoft.VisualStudio.Services.WebApi.VssHttpRetryMessageHandler.<SendAsync>d__1.MoveNext() — End of stack trace from previous location where exception was thrown — at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult() at Microsoft.VisualStudio.Services.WebApi.HttpClientExtensions.<DownloadFileFromTfsAsync>d__2.MoveNext()
Basically, you just need to download the template:
And add the Build Process Template (for Git) to the source code repository branch – click New:
When defining a build in TFS 2013 using the default template for TFVC, you can set the Output location for the build.
But when you set it to AsConfigured, you have to change the default value of the Test sources spec setting so that the build can find the test libraries in the bin folders. Here’s an example of how to do it.
If the full path to the unit test libraries is:
E:\Builds\7\<TFS Team Project>\<Build Definition>\src\<Unit Test Project>\bin\Release\*test*.dll
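With that layout, a Test sources spec that searches the sources tree instead of the binaries folder could look like this (a sketch – the exact relative prefix depends on your build agent’s working directory, so adjust it to your own layout):

```
..\src\**\bin\**\*test*.dll
```

The template’s default value (`**\*test*.dll;**\*test*.appx`) is resolved against the build’s binaries folder, which stays empty when the Output location is AsConfigured – that is why the spec has to be redirected at the bin folders under src.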
So instead, I’ll focus on the problems I faced during the migration.
My first problem was related to the challenges of downgrading from SQL Server Enterprise edition to Standard edition, even though I was upgrading from SQL Server 2008 to 2012. This information was very helpful, and if you plan to downgrade databases from one SQL Server edition to another, take some time to test the backup and restore process.
Renaming the Report Server was another problem I needed to solve. This post helped me a lot and saved me some time. Migrating TFS to a new server, especially when using a non-default instance, can be a problem, so be prepared to dedicate some time to it.
Finally, the TFS reports had a problem – not because of the Report Server, nor because of missing permissions or bad credentials, but because of the data warehouse and the measurements cube. After testing many possibilities, the final solution (and the one that really worked) was creating a new database for the data warehouse (Tfs_Warehouse) and triggering the ProcessWarehouse and ProcessAnalysisDatabase operations manually (take a look at TFS 2010 Warehouse & Reporting trouble shooting basics). Note that this process may take a while depending on the size of each Team Project Collection.
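For reference, those operations are exposed by the warehouse control web service. Assuming the default TFS 2010+ path (replace &lt;server&gt; with your application tier – verify the path on your instance):

```
http://<server>:8080/tfs/TeamFoundation/Administration/v3.0/WarehouseControlService.asmx
```

Browse to that page, invoke ProcessWarehouse and then ProcessAnalysisDatabase (with processingType Full), and poll GetProcessingStatus until the jobs report finished.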
So, in conclusion, the TFS system is built on top of the following databases:
Tfs_Configuration for the instance-wide configuration
Tfs_<collection> for each Team Project Collection
Tfs_Warehouse for the relational data warehouse
Tfs_Analysis (an Analysis Services database) for the measurements cube
ReportServer and ReportServerTempDB for Reporting Services
You must ensure that these databases are not corrupt. The Tfs_Warehouse database and the Tfs_Analysis cube can be rebuilt with some hacks, and the same applies to the Report Server databases. Ensuring that you can migrate these databases successfully is halfway toward making a TFS migration successful.
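Before (and after) the move, a quick integrity pass over those databases is cheap insurance. A minimal sketch with sqlcmd, assuming default database names – substitute your own instance and collection names:

```
sqlcmd -S <sql-instance> -Q "DBCC CHECKDB('Tfs_Configuration') WITH NO_INFOMSGS"
sqlcmd -S <sql-instance> -Q "DBCC CHECKDB('Tfs_DefaultCollection') WITH NO_INFOMSGS"
sqlcmd -S <sql-instance> -Q "DBCC CHECKDB('Tfs_Warehouse') WITH NO_INFOMSGS"
```

No output means no corruption was found. The Tfs_Analysis cube doesn’t need checking this way, since it can simply be rebuilt (for example with TFSConfig RebuildWarehouse /analysisServices).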
In the next posts I’ll explain a scenario I had back at the company with TFS 2012 and how I proceeded with the migration of the infrastructure and the upgrade to TFS 2013. I hope it is a common scenario and that my plan can be of some help.
We had TFS 2012 installed on premises on a single server (Application Tier + Data Tier), integrated with the company’s Active Directory. The company has two geographically separated teams, and the single server sits with one of them. Because we couldn’t predict the growth in users, and the remote team needed access to the build server (the same server), the setup started to become insufficient and required a migration at the infrastructure level. We also wanted to upgrade to TFS 2013 to benefit from the new features. So we faced (at least) three options:
Keep TFS on premises (same server or new server) and upgrade it to TFS 2013
Move TFS to Microsoft Azure and upgrade it to TFS 2013
Move to Visual Studio Online
Soon we concluded that keeping TFS on premises (option 1) wouldn’t solve the external access problem and offered less flexibility if we needed future growth in, for example, build servers.
Moving to Visual Studio Online was a good option, but it meant higher costs and less flexibility in build definitions, among other things.
So option 2 seemed the best solution: we could keep the AD integration and restricted access to the servers while still allowing the external team to access the build servers. On top of that, we’d gain flexibility, because we could easily scale the AT and DT servers in minutes, as well as the disk storage.
Taking a look at the System Requirements, here are some important notes:
All of the supported operating systems are 64-bit;
Only SQL Server 2008 R2 and 2012 editions are supported;
Accounts required for installation – Reporting, Team Foundation Server, Team Foundation Build, Team Foundation Server Proxy, SharePoint Products, SQL Server;
The TFS setup will install IIS;
SharePoint Foundation 2010 can be installed manually or as part of the TFS installation; it doesn’t need to be on the same server as TFS, but if it isn’t, it requires the Extensions for Windows SharePoint Services on the server that is running SharePoint Products;
SharePoint Server 2010 Standard and Enterprise versions are supported, but Enterprise edition provides access to 5 extra dashboards.
Installation options considered in the exam: Advanced, Application-Tier only, upgrading TFS from an earlier release, Build services installation, Proxy services installation.
Team Foundation Server Proxy does not provide scalability, but can save bandwidth by caching version control files at the remote location.
I’m starting the preparation for the MCSD: Application Lifecycle Management certification. As part of this, I’ll create a new post for each exam component. So, to all of you who may be interested, stay tuned. I hope it will help you.
Martin Thompson is a high-performance and low-latency computing specialist, with experience gained over two decades working with large-scale transactional and big-data domains, including automotive, gaming, financial, mobile, and content management. He believes Mechanical Sympathy – applying an understanding of the hardware to the creation of software – is fundamental to delivering elegant, high-performance solutions.
Here, Martin explains his perspectives on high performance computing (and coding), when to go native versus managed (Can you really write super fast, highly machine-optimized code in Java and .NET? Martin does…). This is a long conversation and well worth your time if performant execution is important to you – yes, the irony of a long chat about highly performant computing doesn’t escape me.