Thursday, August 23, 2012

Updates are currently disallowed on GET requests. To allow updates on a GET, set the 'AllowUnsafeUpdates' property on SPWeb.


When you work in Central Administration:


1) Application Management -> Web Applications -> Manage web applications

2) Select a web application -> General Settings. You get the below error:

"Updates are currently disallowed on GET requests.  To allow updates on a GET, set the 'AllowUnsafeUpdates' property on SPWeb."

Fix:

Open the SharePoint 2010 Management Shell and run the below code:

$w = Get-SPWebApplication http://The_Web_App
$w.HttpThrottleSettings   # reading this property initializes the throttle settings on the web application
$w.Update()               # persist the change


This should help.
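If the error comes from your own code rather than Central Administration, the message itself points at the fix: temporarily set AllowUnsafeUpdates on the SPWeb you are updating. A minimal sketch, assuming a placeholder URL and list name, run from the SharePoint 2010 Management Shell:

```powershell
$web = Get-SPWeb http://The_Web_App
try {
    $web.AllowUnsafeUpdates = $true   # allow updates during a GET request
    $list = $web.Lists["Tasks"]       # hypothetical list
    $list.Title = "Team Tasks"
    $list.Update()
}
finally {
    $web.AllowUnsafeUpdates = $false  # always turn it back off
    $web.Dispose()
}
```

Resetting the flag in the finally block matters: leaving AllowUnsafeUpdates enabled defeats the cross-site scripting protection it provides.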

Wednesday, August 15, 2012

Issue while restoring a site backup using stsadm/powershell




I faced an issue today while restoring a backup I had taken using stsadm. I ran the following restore command:


stsadm -o restore -url http://abc/sites/xyz -filename e:\backup.bak -overwrite

It returned the below error:

Your backup is from a different version of Microsoft SharePoint Foundation and cannot be restored to a server running the current version. The backup file should be restored to a server with version '4.1.10.0' or later.

Then I tried it in powershell:


Restore-SPSite -Identity http://abc/sites/xyz -Path e:\backup.bak

Restore-SPSite : Your backup is from a different version of Microsoft SharePoint Foundation and cannot be restored to a server running the current version. The backup file should be restored to a server with version '4.1.10.0' or later.

Same error :-(

I tried applying the Cumulative Updates and Service Packs. That didn't help either.

In this scenario, the resolution is to restore the site collection into a new content database, using either stsadm or PowerShell. It worked for me.


The PowerShell syntax I used for restoring is given below for your reference:

Restore-SPSite -Identity <Site collection URL> -Path <Backup file> [-DatabaseServer <Database server name>] [-DatabaseName <Content database name>] [-HostHeader <Host header>] [-Force] [-GradualDelete] [-Verbose]


Restore-SPSite -Identity http://abc/sites/xyz -Path e:\backup.bak -DatabaseServer xxx -DatabaseName WSS_Content_xxx


Voila!!! The restore worked now.
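Putting it together, the full sequence can be sketched as below; the server and database names are placeholders for your environment:

```powershell
# Create a fresh content database on the web application (names are hypothetical)
New-SPContentDatabase -Name WSS_Content_New -DatabaseServer SQL01 `
    -WebApplication http://abc

# Restore the site collection directly into the new database;
# -Force overwrites the existing site collection at that URL
Restore-SPSite -Identity http://abc/sites/xyz -Path e:\backup.bak `
    -DatabaseServer SQL01 -DatabaseName WSS_Content_New -Force
```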

Sample SharePoint 2010 SP1/CU (Service Pack/Cumulative Update) Installation Instructions



1. Stop the World Wide Web Publishing service

2. Install SP1 for SharePoint Foundation 2010

http://www.microsoft.com/download/en/details.aspx?id=26640


3. Install SP1 for SharePoint Foundation 2010 Language Pack (if applicable) *

http://www.microsoft.com/download/en/details.aspx?id=26629


4. Install SP1 for SharePoint Server 2010

http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=26623


5. Install SP1 for SharePoint Server 2010 Language Pack (if applicable) *

http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=26621


6. Install SharePoint Server 2010 August 2011 CU (hotfix) (requires restart)

http://support.microsoft.com/?kbid=2553048


7. Start the World Wide Web Publishing Service

* Check registry to confirm whether language packs are installed on server:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\14.0\InstalledLanguages

SharePoint version after the upgrade is 14.0.6109.5002 (for August CU 2011)

Restart the machine after the installation
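Steps 1 and 7, plus the post-install version check, can be scripted. A sketch, run elevated in the SharePoint 2010 Management Shell on each server:

```powershell
# Step 1: stop the World Wide Web Publishing service before patching
Stop-Service -Name W3SVC

# ... run the SP1/CU installers and the Configuration Wizard here ...

# Step 7: start the service again
Start-Service -Name W3SVC

# Confirm the farm build number after the upgrade
# (14.0.6109.5002 is expected for the August 2011 CU)
(Get-SPFarm).BuildVersion
```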

Tuesday, August 14, 2012

Sharepoint HTTP Error 503



At times, browsing to any SharePoint site brings up the below error:

Service Unavailable

--------------------------------------------------------------------------------

HTTP Error 503. The service is unavailable.


This means the Web server (running the Web site) is currently unable to handle the HTTP request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. Some servers in this state may also simply refuse the socket connection, in which case a different error may be generated because the socket creation timed out.


1) Check whether the World Wide Web Publishing Service is running (check in Services: type services.msc on Run and press Enter)

2) Go to Run -> inetmgr -> Enter

Check whether the IIS web site hosting the web application is started

3) In IIS, check whether the application pool for the web application is running. If not, start it. This will fix the issue
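The same checks can be run from a PowerShell prompt using the IIS WebAdministration module; the site and pool names below are placeholders for your own:

```powershell
Import-Module WebAdministration

# 1) Is the World Wide Web Publishing service running?
Get-Service -Name W3SVC

# 2) Is the IIS web site started?
Get-WebsiteState -Name "SharePoint - 80"

# 3) Is the application pool started? If not, start it.
if ((Get-WebAppPoolState -Name "SharePoint - 80").Value -ne "Started") {
    Start-WebAppPool -Name "SharePoint - 80"
}
```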

Identify the site template used in a SharePoint site




1) Browse to the site

2) Right Click the mouse -> View Source

3) Search for g_wsaSiteTemplateId

4) var g_wsaSiteTemplateId = 'STS#1';

5) STS#1 indicates that it is a Blank Site. You can refer to another of my posts for a reference list of site templates.
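If you have server access, the same information is available from PowerShell without viewing the page source; the URL below is a placeholder:

```powershell
$web = Get-SPWeb http://abc/sites/xyz
# Template name and configuration ID, e.g. STS and 1 for a Blank Site
$web.WebTemplate
$web.WebTemplateId
$web.Dispose()
```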

Wednesday, August 8, 2012

Create Sharepoint Site Using Site Template ID's

                           
Whenever we create a site in SharePoint we choose a site template. If the site is created from Central Administration, you pick the desired template from the UI. If the site collection is created from the PowerShell console or stsadm, we have to specify the template value for the desired site template.


For stsadm (MOSS 2007)


stsadm -o createsite -url <url> -owneremail <email address> [-ownerlogin <DOMAIN\name>] [-ownername <display name>] [-secondaryemail <email address>] [-secondarylogin <DOMAIN\name>] [-secondaryname <display name>] [-lcid <language>] [-sitetemplate <site template>] [-title <site title>] [-description <site description>] [-hostheaderwebapplicationurl <web application url>] [-quota <quota template>]


-sitetemplate STS#number/MPS#number

STS#0: Team Site

STS#1: Blank Site

STS#2: Document Workspace

MPS#0: Basic Meeting Workspace

MPS#1: Blank Meeting Workspace

MPS#2: Decision Meeting Workspace

MPS#3: Social Meeting Workspace

MPS#4: Multipage Meeting Workspace


For PowerShell (SharePoint 2010)

$template = Get-SPWebTemplate "STS#0"
New-SPSite -Url "<URL for the new site collection>" -OwnerAlias "<domain\user>" -Template $template
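For example, to create a site collection from the Blank Site template (the URL, title, and account are placeholders):

```powershell
$template = Get-SPWebTemplate "STS#1"   # STS#1 = Blank Site
New-SPSite -Url "http://abc/sites/blank" -OwnerAlias "DOMAIN\user" `
    -Title "Blank Site" -Template $template
```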

Creating and Managing SharePoint Content Databases

I consider creating and managing content databases a two-part exercise: first, design for availability; second, design for performance. Before moving forward, consider the following two points:

1. Service Level Agreements (SLAs). How long can your site collections be out of service? When designing your site collections, it is important to remember that a site collection is contained in a single content database and cannot span content databases. Consider the following when designing for site collection availability:

Recovery Time Objective - The recovery time objective (RTO) defines how long your system can be down before it is back online after a disruption. The disruption could be anything from a SQL Server outage to a WFE server failure. The RTO should include data recovery at the server, farm, database, site, list, and item levels.

Recovery Point Objective - The recovery point objective (RPO) defines your data loss threshold, measured in time. If you run daily backups only and ignore the SQL Server transaction logs, then your RPO is 23 hours, 59 minutes, and 59 seconds. Any data written to SharePoint Server 2007 after the backup ran cannot be restored via native tools until after the next backup. Many organizations assume this risk without fully understanding the impact of losing 24 hours' worth of data.

2. Performance. You don't have to host all content databases on the same disks in your SQL Server. In fact, you don't even have to host them on the same SQL instance! Very large and/or busy site collections can be hosted on very fast disk using RAID 1+0 or 0+1 (depending on your speed vs. availability trade-off). You can then host the more generic site collections on less expensive configurations/disks, and assume more risk.

So, how do we begin our design? I would begin by calculating both the A. size and B. performance levels required. If you have a very fast site collection that is mostly read (think WCM / Publishing Site), then you need to optimize the data files on your SQL Server. That means a config like RAID 0+1 for super speed, or RAID 1+0 for more availability and decent speed. Your transaction logs won't matter as much. But, if you are designing for a highly-collaborative environment, then you probably want to optimize your transaction log files.

Microsoft often states that 100GB is the recommended maximum for a content database. This is mainly because small to medium shops may have difficulty maintaining, backing up, and restoring large content databases. But, let's be honest: You can't span content databases, so you are accepting a maximum Site Collection size of 100GB (or less if you count the 2nd stage recycle bin). Really? 100GB doesn't seem very big anymore :) If you need larger site collections, optimize SQL and rock on...
Update: There seems to be an issue with really large databases and large lists. See: http://blogs.msdn.com/toddca/archive/2008/03/23/database-disconnect-issues-with-sharepoint.aspx and http://joeloleson.spaces.live.com/blog/cns!B05AD15E2DE730DD!116.entry (the latter still seems to be valid). I know you want to know more - so do I. Look for a blog post on this in the near future!
There are some performance hits with large site collections, but these usually aren't that bad. Just be sure to test, test, test. Visual Studio works great for stress testing your site collection performance.
We also need to think through our SLAs when designing content databases/site collections. The two are inseparable and are designed simultaneously. SharePoint Server 2007 can adapt to a multi-tiered SLA arrangement at the site collection/content database levels. If you grouped site collections by their criticality in corresponding content databases, you can then use SQL Server tools to manage them to different support levels. The below picture shows a possible database design for hosting three different SLA levels within a single farm.


If you group site collections similar to what is shown in the above picture, then you can manage them accordingly. Level 1 site collections could have frequent SQL level backups and be mirrored to another SQL instance. Level 2 site collections might be transaction log shipped, and Level 3 site collections might be in simple recovery mode and backed up only once a day. Additionally, every level of content database could be on a different SQL Server instance, and on different disk subsystems. Note that this depicts a one-to-one relationship between a Level 1 site collection and a Level 1 content database. While this provides robust recovery and performance options, it does not scale well. (Microsoft recommends no more than 100 content databases per Web application)
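The SLA levels described above could be provisioned as separate content databases on the web application; a sketch using SharePoint 2010 PowerShell, with hypothetical database and server names:

```powershell
# Level 1: one database per critical site collection (one-to-one)
New-SPContentDatabase -Name WSS_Content_Level1 -DatabaseServer SQL01 `
    -WebApplication http://abc -MaxSiteCount 1 -WarningSiteCount 0

# Level 2/3: shared databases, possibly on a different SQL instance
New-SPContentDatabase -Name WSS_Content_Level2 -DatabaseServer SQL02 `
    -WebApplication http://abc
```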

Note: If you have already over-loaded and over-populated your content databases, check out Todd Klindt's blog here to learn how to move them around. (nice post, Todd!)

Another best practice is to create multiple content databases to support multiple site collections. A very common mistake we see is customers creating all site collections in a single content database. This usually results from a lack of understanding about how the product should be architected, and partially from the process of rapid deployments. All is not lost, however, if you have implemented this way.

I feel obligated to explain an almost always misunderstood Central Administration interface. The first picture below shows the Central Administration -> Application Management -> Content Databases screen used to take a database offline. The second picture shows the status of the database as Stopped after it has been taken offline.




These settings do not mean what they appear to mean. Taking a database offline merely blocks any new site collections from being created in it. It does not take the database offline as one might think. Users can still upload and download content, view Web pages, and process workflows. The content database status will also show as stopped. Once again, this means only that new site collections cannot be created in this database. If you want a one-to-one site collection to content database association, taking the hosting database offline is the best method to accomplish this.
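In SharePoint 2010, a similar "block new site collections" effect can be achieved from PowerShell by capping the site count on the database; a sketch, assuming a hypothetical database name:

```powershell
# Setting MaxSiteCount to the current number of site collections
# stops any new ones from being created in this database, much like
# taking the database offline in Central Administration.
$db = Get-SPContentDatabase -Identity WSS_Content_Level1
Set-SPContentDatabase -Identity $db -MaxSiteCount $db.CurrentSiteCount `
    -WarningSiteCount 0
```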


Thursday, August 2, 2012

Sharepoint Search (Index Server, Query Server and Crawl Server)

Updated on Oct 17, 2013 - P.S. When I wrote this article, my references were based on SharePoint 2007 (MOSS 2007). To a certain extent this still holds true for SharePoint 2010, and the SharePoint Search concepts remain the same.
                 
I spent some time understanding the difference between the SharePoint Index Server, Query Server, and Crawl Server and how they are configured. In the end I learned that they can be configured in different ways, depending on how we want the farm to perform. Based on the usage of SharePoint, the number of users, and how extensively search is used, we can have multiple configurations serving the same purpose: searching content in SharePoint.

When we plan to add a new server (either a query or an index server) to a farm, we get two checkboxes:

"Use this server to index content" and
"Use the server for serving search queries"



What is the difference? Do we check both to make it a dedicated Index Server/Query Server?

This is a very important distinction, and the decision depends on your preferred architecture and performance.  The index checkbox is definitely a requirement for making it a dedicated index server. This gives it the role of building and storing the index.  The query role does not have to be on your index server.  You can instead use your web front ends (1 or more) as query servers.  This tells the index server to propagate its index to the WFEs that are set as query servers so that they have a local copy of the index. When someone does a search (this is done on the WFE), then that WFE will search itself locally instead of going across the network to query the index server.  This increases speed at the time of query, but it of course introduces additional overhead in terms of having multiple full copies of the index on the network and the network demand of propagating those index copies all the time. 

If you select the query role on your index server, then the index will not get propagated and all searches will query the index server across the network. To set WFEs as query servers, you have to activate the Office Search Service and only select the query checkbox, and then tell it where to store the index.



Another important role is that of crawling that is defined in the SSP settings.  The crawl server (or servers) is the WFE that the indexer uses for crawling content.  A new concept being used is that you can actually make your index server a WFE that isn't part of your normal web browsing rotation (not load balanced) and then set it as the dedicated crawler. This allows the indexer to crawl itself, which does two things: avoid the network traffic of building the index across the network and eliminates the crawling load on the WFEs.  Since your index server becomes an out-of-rotation WFE for regular browsing, you can actually use it to host your Central Admin and SSP web apps, which again reduces load/overhead on the content WFEs.

A query server can't be a query server unless it has the index.  Whenever you add the query role to a server, it asks for a file location, and the index gets propagated to that location on that server.  It's good to have the WFE as a query server so that the searches are fast (queries itself locally) and for some redundancy, since index servers cannot be made redundant.  If the index server goes down, WFEs still have a copy of the index for allowing searches with current info - they just don't get refreshed until the index server comes back online.

If we put query on the index server, then queries have to go from the WFE to the index server and back, which can cause a performance hit, but it's still doable.  We have to decide what is best for our situation.  Just remember that acting as a query server will compete with the very intense indexing process if they're on the same box.

One of the farms I worked in had the below architecture:


5 WFEs (4 were load balanced; 1 held the index and crawl roles and was excluded from the load-balanced rotation)
2 App Servers
2 DB Servers (mirrored)
2 Query Servers