Tuesday, December 2, 2014

SSRS tweaks for external facing environment

An external-facing SSRS environment (for consumption by non-corporate users, e.g. customers, vendors, etc.) requires some extra tweaking to make it functional and compliant with organizational policies.

1. SSRS session expiring even before SharePoint session expires (for SharePoint integrated mode)

SSRS has its own session settings. Make sure you have set them at par with the SharePoint session settings. I’ve discussed the SharePoint session settings in my previous posts.


If you want to avoid conflicts between the SSRS and SharePoint session settings, uncheck “Use Session Cookies”. If the session timeout settings differ between SharePoint and SSRS, you may face problems while creating subscriptions: every time the session expires, the Manage Subscriptions page may throw errors saying the subscription list doesn’t exist.

2. Disable “Open with Report Builder”

When SSRS reports are executed, they show an “Open with Report Builder” option under the Actions button. In an external-facing environment you may not want it: external users are mostly non-Windows users, while the Report Builder tool works only with Windows authentication, so the option is of no use anyway. To remove it from the Actions button, run the following SQL query against the main SSRS database.

UPDATE dbo.ConfigurationInfo
SET Value = 'False'
WHERE Name = 'EnableReportDesignClientDownload'
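
If you prefer to script the change, here is a minimal sketch using Invoke-Sqlcmd; the server and database names are placeholders (in SharePoint integrated mode the catalog database is typically named ReportingService_&lt;GUID&gt;).

# A hedged sketch, not the only way: run the update via the SqlServer/SQLPS
# module. Instance and database names below are placeholders to replace.
Invoke-Sqlcmd -ServerInstance "<SQL instance hosting the SSRS catalog>" `
              -Database "<SSRS catalog database>" `
              -Query @"
UPDATE dbo.ConfigurationInfo
SET Value = 'False'
WHERE Name = 'EnableReportDesignClientDownload'
"@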

Sunday, November 23, 2014

SharePoint External Architecture & Implementation - Part 6

Previous Blog - Part 5
Note: This is a long post.
This post deals specifically with the Microsoft BI scenario in an external-facing environment. As all roads lead to Rome, all problems in externally facing environments lead to a single cause – authentication.
When the end users of an externally facing Microsoft BI environment are vendors and customers, you do not want them to see each other’s data. However, you also don’t want to create a separate set of reports for each of these users. The best, most optimized, and widely accepted solution is to provide row-level security and handle user authorization at the data source level. Different databases have different mechanisms for row-level security. I’m going to talk about how to enable row-level security for SSAS (SQL Server Analysis Services) in the absence of Active Directory, with SharePoint as the front end.

I’ve always wondered why Microsoft didn’t allow any authentication except Windows authentication in SSAS. It would have given architects some legroom to try it with non-Microsoft products. Nevertheless, they left a window open with the “CustomData” implementation, similar to query banding (read here). Optional parameters can be passed in the connection string by binding them to CustomData. Once this CustomData value is retrieved on the SSAS server, it can be used to drive logic using DAX (Data Analysis Expressions).

An important thing to note here is that only two solutions can provide row-level security in an external Microsoft BI environment (assuming it’s a non-Windows-authentication environment): SSRS (SQL Server Reporting Services) and PPS (PerformancePoint Services). Other services such as Excel Services, Power Pivot, Power View, etc. don’t provide a way to delegate the end user’s credentials to the data source.

Now let’s look at the implementation of row-level security for a Microsoft BI solution with SSAS as the data source. The problem has to be divided into two parts.
  1. Send end user’s credentials to data source
  2. Return only those rows where end user has access

Send end user’s credentials to data source

a) SSRS – Use the following connection string within the SSRS report data source. Make sure it’s an embedded connection string, i.e. the connection string is passed as a text expression within the SSRS data source, because only then can you use the SSRS built-in parameter User!UserID to pass the end user’s identity. If you choose to specify the connection string through the SharePoint UI, this built-in variable will not be available. This is how the connection string would look:
="Data Source=<SSAS Server Name>:<Port>;Initial Catalog=<Database Name>; Character Encoding=UTF-8; Customdata=" & User!UserID

b) PPS – PPS provides a direct way to insert CustomData into the connection string. All you have to do is check the “Provide the authenticated user as the value of the connection string property "CustomData"” checkbox on the PPS data connection properties page.


Because SharePoint keeps the login name in an encoded format (read here), the CustomData passed to SSAS will be in that encoded format.

Return only those rows where end user has access

On the SSAS cube, create a role and add as members the accounts that are supplied in the SSRS data source and the PPS unattended service account. Make sure you don’t give these accounts admin access on SSAS; otherwise the whole exercise of row-level security will fail and all users will see all the data.

Because CustomData has been passed via the connection string, you can read its value and apply logic to it. The first piece of logic you need is to decode the encoded string passed by SharePoint. This can be achieved using DAX in the role created for cube access.



RIGHT (
    RIGHT ( CUSTOMDATA (), LEN ( CUSTOMDATA () ) - SEARCH ( "|", CUSTOMDATA () ) ),
    LEN ( RIGHT ( CUSTOMDATA (), LEN ( CUSTOMDATA () ) - SEARCH ( "|", CUSTOMDATA () ) ) )
        - SEARCH ( "|", RIGHT ( CUSTOMDATA (), LEN ( CUSTOMDATA () ) - SEARCH ( "|", CUSTOMDATA () ) ) )
)


The above DAX expression strips everything up to the second “|” in the claims-encoded value passed from SharePoint and returns the actual user login ID. You can use the login ID in further logic to return only those rows that match certain values.
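
To illustrate what the expression does, here is the same decoding mirrored in PowerShell; the provider name and login below are hypothetical, and the exact encoded format depends on your claims configuration.

# Hypothetical claims-encoded login as SharePoint would pass it in CustomData;
# the DAX above keeps only what follows the last "|".
$customData = 'i:0e.t|myprovider|jdoe@contoso.com'
$loginId = ($customData -split '\|')[-1]
$loginId   # -> jdoe@contoso.com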

Monday, November 17, 2014

SharePoint External Architecture & Implementation - Part 5

Previous Post: Part 4
One of the major problems with a DMZ environment is that it’s a strictly controlled network enclave. In most scenarios you can’t access the internet. Your organization might provide a proxy as an interface to the internet, but that can be a cumbersome process, especially if it requires registering each URL to be accessed.
Why am I talking about internet access? As it turns out, SharePoint uses its own self-created certificates for communication between many of its services. Also, if you are using external domain certificates, these certificates will contact CRLs (certificate revocation lists) for certificate validation. Most probably these CRLs are on the internet.
Whenever these certificates are invoked – that is, every time somebody connects, authenticates, or a service needs them – they go online for validation: SharePoint tries to connect to the internet to contact the CRLs. If a CRL is not accessible, SharePoint keeps trying to reach it. This results in degraded, potentially unacceptable, performance for end users. Fortunately there is a Windows setting that can be used to work around this issue.
Open “Local Security Policy” on all SharePoint servers – type secpol.msc in the Run prompt.
Select “Public Key Policies” from the left tree menu and open “Certificate Path Validation Settings” from the right pane.
Select the “Network Retrieval” tab. Check the “Define these policy settings” checkbox if it’s not already checked.
Remove the check from “Automatically update certificates in the Microsoft Root Certificate Program (recommended)”.
Set 1 second for both “Default URL retrieval timeout” and “Default path validation cumulative retrieval timeout”.
The above settings cause the CRL validation attempt to time out in 1 second, saving a lot of time and restoring performance.
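
If you need to apply this across many servers, the same policy maps to registry values (in milliseconds). The following is a sketch based on the documented Certificate Path Validation policy keys; verify the key names on your Windows version before rolling it out.

# Assumed registry location for the "Certificate Path Validation Settings"
# network retrieval policy; values are in milliseconds (1000 = 1 second).
$path = 'HKLM:\SOFTWARE\Policies\Microsoft\SystemCertificates\ChainEngine\Config'
New-Item -Path $path -Force | Out-Null
Set-ItemProperty -Path $path -Name 'ChainUrlRetrievalTimeoutMilliseconds' -Value 1000 -Type DWord
Set-ItemProperty -Path $path -Name 'ChainRevAccumulativeUrlRetrievalTimeoutMilliseconds' -Value 1000 -Type DWord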

Sunday, November 16, 2014

SharePoint External Architecture & Implementation - Part 4

Previous Blog : SharePoint External Architecture & Implementation - Part 3
In previous posts I talked about architecture and authentication for an external-facing SharePoint environment. However, in an enterprise setup, access control is required to ensure that data is delivered only on a need-to-know basis. The primary issue for an external-facing environment is that it may not have an Active Directory-based user and group store from which to pick users and groups for permission assignment. While SharePoint works seamlessly with Active Directory, in its absence a customized solution is required to attach a user and group store for assigning permissions. Because SharePoint uses claims for its internal working, any custom authorization process must provide user and group information through claims – hence the name custom claims provider.
Run the command Get-SPClaimProvider on a SharePoint server and you will see a few built-in claim providers.
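
For example, the following (run from the SharePoint Management Shell) lists the registered providers; the selected property names follow SPClaimProviderDefinition and are worth verifying on your version.

# List claim providers registered in the farm, with their status.
Get-SPClaimProvider | Select-Object DisplayName, IsEnabled, IsUsedByDefault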
A custom claims provider is a .wsp solution for SharePoint, created in Visual Studio. SharePoint provides a few methods that need to be implemented to attach a custom user and group store. The store could be any database, such as SQL Server, a file, a SharePoint list, etc.
Besides providing claims to SharePoint for assigning permissions, a custom claims provider can also augment incoming user claims. This means you can attach or learn more information about the user who is accessing the environment.
There are numerous resources available online that explain how to build a custom claims provider. Some useful ones that I found are given below.
http://www.codeproject.com/Articles/506023/Understanding-SharePoint-Custom-Claims-Provider
http://blog.podrezo.com/sharepoint-custom-claims-provider/
http://www.titus.com/blog/2012/03/building-a-custom-claim-provider-to-manage-security-clearances/
For a Microsoft BI environment, here’s a very useful document.
http://social.technet.microsoft.com/wiki/contents/articles/15274.using-claims-authentication-across-the-microsoft-bi-stack.aspx
Note: A custom claims provider creates issues with SSRS subscriptions if you are planning to use it for BI. I worked with Microsoft to find a solution to this, but even they were unable to pinpoint the cause of the problem. For a BI environment, you can take an alternate approach of creating SharePoint groups through a .wsp solution.

Monday, August 11, 2014

SharePoint External Architecture & Implementation - Part 3

 

Previous Blog : SharePoint External Architecture & Implementation - Part 2

Authentication is a very important factor when building an external environment. One of the most important things to understand is that an external platform is usually used by non-corporate users, namely customers and suppliers. There are also applications that are used by corporate employees. For all applications accessed by corporate employees, the standard mode of authentication is Active Directory (Kerberos). If your application is to be used by corporate employees, very good: there is already a solid authentication mechanism in place, and all you have to do is build your application and join it to the corporate domain. It’s easier said than done, but there are even more challenges to be faced when your application also caters to non-corporate users.

Before we talk about the external authentication mechanism itself, let’s take a look at how authentication works for an external application in general.

First, what is Authentication?

“Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be.”

To successfully and accurately determine the identity of a user, it is necessary to have a reference list, database, password, or some other method that can uniquely identify the user. Let’s understand different authentication scenarios through two examples.

Example 1:

I book an airline ticket online. On the scheduled date, I arrive at the airport for check-in. Security staff ask me for my ticket, and I show them a locally printed copy. They are satisfied that it’s a valid ticket for a journey that will start in some time. However, they still ask me for identity verification, to ensure that I am who I claim to be. This identity verification can be done with my driving license or passport. They look at the photograph on the identity card, then at me, and will let me board the flight only when they are convinced it belongs to me.

Example 2:

I book a ticket for a movie. I arrive at the movie theater to claim my seat. The gatekeeper asks me for my ticket, and I present it to him. He takes a look at the ticket, validates that it’s for the right show, and lets me in.

The major difference in the above examples is that in example 1, even after I claimed I had a valid ticket, I was asked to prove my identity, while in example 2 my claim was sufficient. Example 1 showcases the working of an Active Directory authentication mechanism: after I pay for my ticket (authenticate with my password), I’m issued a ticket with a fixed validity period. My operating system uses this ticket to access applications, and the applications, for their part, validate with Active Directory services that the ticket is valid and was indeed issued to me. Example 2 is a typical scenario for claims-based authentication, usually used for external-facing applications. Because non-corporate users don’t have valid accounts in the corporate directory service, they are issued a ticket through a mutually trusted user store. Examples of such user stores are Microsoft Live, Microsoft Azure, Google+, Facebook, LinkedIn, etc. Once a user is authenticated through the user store, the identity provider issues a time-bound token to the user’s browser or client application. This token may carry attributes such as login name, email address, country, etc. The user presents this token to the external application, and the application accepts it, NO QUESTIONS ASKED. It doesn’t go back to the user store to validate; it simply trusts the token.

External-facing applications accessed by non-corporate users work on the same mechanism as described in example 2: valid users are issued tokens after they are authenticated, and the token is stored on the client machine in cookies.

If external users have to be authenticated and authorized, there needs to be a user store/repository against which they can be validated. The choice depends entirely on organizational preferences: it could be an external user repository such as Microsoft Azure or Google+, or an internal repository such as IBM Tivoli Directory Server or any other open LDAP directory service.

In the absence of Active Directory, SharePoint uses a custom authentication provider, namely a claims federation agent. Special configuration has to be put in place to hook this federation agent into SharePoint, and this configuration is done with PowerShell. In this post I’ll walk you through the process of configuring a custom federation agent and how it works.

During the configuration of the federation agent in SharePoint, you also define the login URL for SharePoint. When a user request hits SharePoint, SharePoint redirects the user to this login URL. The login URL belongs to an authentication mechanism that takes the supplied user credentials and validates them against the user store. Once this mechanism validates the user, it passes this information to the federation agent, along with any required attributes for the user, such as username, email address, et cetera. The federation agent, on receiving these attributes, converts them into a claims token and sends that token to SharePoint. SharePoint uses claims authentication within the farm to communicate between servers and services.

Let’s understand how to configure an authentication provider for SharePoint.

$root = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("path to root certificate")
New-SPTrustedRootAuthority -Name "Root" -Certificate $root

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("path to certificate provided by federation agent")
New-SPTrustedRootAuthority -Name "<authentication provider name>" -Certificate $cert

$upnClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/UID" -IncomingClaimTypeDisplayName "UID" -SameAsIncoming
$emailClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/EmailAddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
$CNClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/CommonName" -IncomingClaimTypeDisplayName "CommonName" -SameAsIncoming
$FirstNameClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/FirstName" -IncomingClaimTypeDisplayName "FirstName" -SameAsIncoming
$LastNameClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/LastName" -IncomingClaimTypeDisplayName "LastName" -SameAsIncoming

$realm = "urn:sharepoint:<environment_name>"
$signInURL = "<login url, would be given by federation agent>"
$ap = New-SPTrustedIdentityTokenIssuer -Name "<authentication provider name>" -Description "<description>" -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $upnClaimMap,$emailClaimMap,$CNClaimMap,$FirstNameClaimMap,$LastNameClaimMap -SignInUrl $signInURL -IdentifierClaim $upnClaimMap.InputClaimType

$sts = Get-SPSecurityTokenServiceConfig
$sts.LogonTokenCacheExpirationWindow = (New-TimeSpan -Minutes 1)
$sts.Update()
iisreset

Let’s walk through each of the above code blocks.


$root = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("path to root certificate")
New-SPTrustedRootAuthority -Name "Root" -Certificate $root
 
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("path to certificate provided by federation agent")
New-SPTrustedRootAuthority -Name "<authenticatino provider name>" -Certificate $cert


The above commands import certificates into SharePoint’s trust store. The federation agent always sends a signature along with the token to SharePoint. For SharePoint to trust that token, the signature must match one of the certificates in its trust store. The first certificate we import belongs to your domain’s trust chain (the root); the next belongs to the federation agent.


$upnClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/UID" -IncomingClaimTypeDisplayName "UID" -SameAsIncoming
$emailClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/EmailAddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
$CNClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/CommonName" -IncomingClaimTypeDisplayName "CommonName" -SameAsIncoming
$FirstNameClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/FirstName" -IncomingClaimTypeDisplayName "FirstName" -SameAsIncoming
$LastNameClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/LastName" -IncomingClaimTypeDisplayName "LastName" -SameAsIncoming


Here we map SharePoint claims to the incoming claims in the token sent by the federation agent. The total number of claims to use depends on your requirements, and the ClaimType URI format (e.g. http://schemas.xmlsoap.org/claims/LastName) is of your choosing, as long as it matches what the federation agent sends.


$realm = "urn:sharepoint:<environment_name>"
$signInURL = "<login url, would be given by federation agent>"
$ap = New-SPTrustedIdentityTokenIssuer -Name "<authentication provider name>" -Description "<description>" -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $upnClaimMap,$emailClaimMap,$CNClaimMap,$FirstNameClaimMap,$LastNameClaimMap -SignInUrl $signInURL -IdentifierClaim $upnClaimMap.InputClaimType


In the above lines, we identify SharePoint as a client for the federation agent and vice versa. The sign-in URL would be provided by the federation agent. All previous commands were preparation for the configuration of an authentication provider; with New-SPTrustedIdentityTokenIssuer we finally implement it. With this command we specify the imported trust certificate, all the claims mappings, the realm, and the sign-in URL. If this final command completes without error, you have set up your custom authentication provider.


To check whether the authentication provider is set up, run Get-SPTrustedIdentityTokenIssuer.
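
A slightly more informative variant is sketched below; it assumes the usual SPTrustedLoginProvider property names, which are worth verifying on your version.

# Show the issuer's key settings; ProviderUri is the sign-in page configured above.
Get-SPTrustedIdentityTokenIssuer | Format-List Name, IdentifierClaim, ProviderUri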


However, there is a great chance that you still cannot log in to a site collection created with this authentication provider. You might face a problem where the federation agent sends the token but SharePoint does not accept it, or, to put it more accurately, SharePoint authentication goes into a loop. This happens because SharePoint’s default token expiration window is 10 minutes, whereas most federation agents create tokens with an expiration window of just 2 minutes. When SharePoint receives the token from the federation agent, it checks the token’s expiry against its own window; once it realizes the token will expire within that window, it rejects the token and sends the user back to the federation agent for a new one, and this goes on and on. To fix this problem, set SharePoint’s token expiration window to less than the token lifetime provided by the federation agent. That way SharePoint treats the token as valid for its entire remaining duration. To set the SharePoint token expiration window:


$sts = Get-SPSecurityTokenServiceConfig
$sts.LogonTokenCacheExpirationWindow = (New-TimeSpan -Minutes 1)
$sts.Update()
iisreset


With this, you will be able to use your new authentication system.






Saturday, July 5, 2014

SharePoint External Architecture & Implementation - Part 2


In part 1 I talked about the architecture for a SharePoint external environment. Let’s see how to implement such an environment.

Network/Data Flow
It is important to understand the network flow of a SharePoint environment to be able to implement it without hitches.

A nicely compiled list of required network ports: Click Here

source: http://blogs.msdn.com/b/uksharepoint/archive/2013/01/21/sharepoint-2013-ports-proxies-and-protocols-an-overview-of-farm-communications.aspx
All the mentioned ports are the defaults used by the applications. Change them per your organizational requirements. If you don't have more than 10,000 users, don't bother setting up separate Distributed Cache servers; you can run Distributed Cache on your WFEs.

If you have decided to host all farm servers in a single network subnet, allowing the ports shouldn't be much of a problem, except for non-farm servers such as the SMTP host, external content servers, etc.
If you are hosting WFE and application servers in separate network segments, you will have to make sure all required network ports are open between them.

Some organizations take network security very seriously and like to keep their DMZ isolated from other corporate networks. I came across a similar situation where the WFEs were in the DMZ but the application servers had to be hosted in the HTZ, so that the app servers could connect to the data warehouse in the GNZ. Most importantly, blanket connectivity on specific ports between the DMZ and the HTZ was not allowed. The only way to connect the WFEs with the application servers was to route communication through a web service security appliance. This poses unique challenges because SharePoint farm servers communicate directly on specific ports, and all internal communication happens using NetBIOS names.


In order to configure the web service security appliance, we must know the exact web service endpoints through which the WFE initiates connections to the app servers. Each SharePoint service application has its own set of web services. All of these are SOAP web services, but Microsoft uses a custom SOAP format. To find the exact web service endpoints for a service application, check IIS Manager on the application server where that service is installed.

Let's take the example of SSRS (SharePoint integrated mode).
Open IIS Manager on the SSRS application server. Under "SharePoint Web Services", expand the web application for SSRS. If there are multiple web applications, the best way to identify the correct one is to check its application pool. It is good practice to keep separate service accounts for different services: the application pool configured with the SSRS service account is the correct app pool, and the web application using that app pool is the one used for SSRS. Click on the web app and check Content View.


These are WCF web services, ending with .svc. You will have to configure the following web services for SSRS; a scripted way to enumerate them is sketched after the list.
  1. AlertingWebService.svc
  2. ReportExecution.svc
  3. ReportingWebService.svc
  4. ReportServiceBackgroundProcessing.svc
  5. ReportStreaming.svc
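As a convenience, the endpoints can also be enumerated from PowerShell. This is a sketch that assumes the WebAdministration module is available and that the SSRS application pool name contains "SSRS" – adjust the filter to your service account naming.

# Enumerate .svc endpoints under the SSRS web application inside the
# "SharePoint Web Services" IIS site.
Import-Module WebAdministration
Get-WebApplication -Site 'SharePoint Web Services' |
    Where-Object { $_.applicationPool -like '*SSRS*' } |   # assumption: pool name contains SSRS
    ForEach-Object {
        Get-ChildItem -Path $_.PhysicalPath -Recurse -Filter *.svc |
            Select-Object -ExpandProperty Name
    }
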
Because all network communication from the WFE to the app servers should pass through the web service security appliance, the hosts file on the WFE should map the app server name to the appliance's IP. Make the following change in the hosts file on each WFE.
<web service security appliance IP>      <App server NetBIOS name>
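
A quick scripted way to add the entry is sketched below; the IP and server name are hypothetical placeholders for your appliance and app server.

# Append the appliance mapping to the WFE hosts file (run elevated).
# 10.0.0.50 and APPSERVER01 are hypothetical placeholders.
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.0.0.50`tAPPSERVER01"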

If your web service security appliance requires the app server name to forward traffic, always use the app server's NetBIOS name, not the FQDN. If your web service security appliance validates SOAP, it is going to stop all traffic, as SharePoint web services don't follow the standard SOAP format. The best way to avoid this problem is to configure the web service security appliance with REST.

Sunday, June 29, 2014

SharePoint External Architecture & Implementation - Part 1

Designing and implementing an external architecture for a SharePoint farm can be a challenging job, especially if not done right. In this series I'll talk about implementing an external SharePoint farm with the BI stack in mind, i.e. a complete Microsoft BI stack with external presence.

Below are the ingredients of a typical Microsoft BI Stack.
  1. SharePoint as front end
  2. SQL Server Analysis Services Server for Cube DB
  3. Corporate Data warehouse/data marts for application data
Critical consideration should be given to the placement of these services within the organizational network. For security's sake, not all segments of this stack should be hosted externally in the DMZ. A careful approach should be taken to expose only those components that require external presence.

Let’s understand the process step by step. Most organizations have clearly defined network segments for internal as well as external presence. At times, organizations also have special security zones for sensitive data; where there is no special zone, the sensitive data merges with the GNZ.


Because SharePoint is going to be the front end, it ends up in the DMZ. SharePoint gives us the flexibility to host different component services on separate servers. Let’s look at two scenarios for SharePoint external presence to understand their usage.
  1. SharePoint as a standalone content management system 
  2. SharePoint as front-end of a data driven stack (example: Business Intelligence Application)

SharePoint as a standalone content management system
This is a bare-bones scenario where SharePoint is used with its basic functionality to host static content. The SharePoint content has no dependency on any corporate database. This is a simple design, and the entire SharePoint farm can reside inside the DMZ. The downside is that it cannot host any sensitive data, as it is exposed to malicious attacks from the internet.

SharePoint as front-end for a data driven stack (example: Business Intelligence Application)
This is where the complexity starts, because SharePoint feeds on data hosted in the corporate network. A good, policy-driven network architecture would not allow direct connectivity between the DMZ and the GNZ. To overcome this kind of problem, network architects usually build an HTZ: a staging area that sits between the DMZ and the GNZ, with strong firewalling to deflect external attacks. The picture below explains how a Microsoft BI stack would look with an external presence.
The architecture looks simple, but it is very complex to implement. In the next post we will see how to implement this architecture.

Next: Click Here

Monday, June 9, 2014

SSAS 2012 Encryption

There are times when you want to (and sometimes even have to) encrypt TCP communication between servers, or between client and server. These requirements are mostly associated with regulatory compliance such as SOX, HIPAA, etc. At other times they arise because the transmitted data falls under a high security classification within the organization.

A similar case happened to me, when I was required to ensure that TCP communication between an Excel client and SSAS 2012 was encrypted. While this article applies to server-to-server communication as well, in my case I was more concerned about client-server communication.

SQL Server supports security and encryption mechanisms; SSAS, however, doesn't support any encryption methodology by itself. This gap is covered by the Windows operating system: depending on the authentication method negotiated (NTLM/Kerberos) through SSPI (Security Support Provider Interface), encryption is handled by the operating system.

To verify that the traffic is encrypted, I captured the client-to-SSAS traffic with Wireshark. To do so, start Wireshark on the client and capture live traffic. Once you have connected to SSAS through the Excel client, select the client-SSAS conversation: right-click any item in the captured traffic, select Conversation Filter, and choose either IP or TCP as the filter.

Once you have the conversation, click on any item and select Follow TCP Stream. If the data you see in the stream is indecipherable, it is encrypted. Hence proved.
