Archive for the ‘General’ Category

Office client apps in Office 365 plans – are they all the same?

Sunday, February 11th, 2018

 

During discussions with our clients about Office 365 plans, I realized that most people are not aware of the differences between the Office client apps that come with different Office 365 plans. Instead, most people think that all Office 365 plans that include Office client apps actually contain the same Office client applications (Word, Excel, Outlook, …). For example, Office 365 E3 includes Office client apps, and so does Office 365 Business Premium. However, these are not the same (although very similar) Office client apps. This misunderstanding can sometimes lead to wrong licensing decisions and/or missing functionality. And that’s the reason I’m writing this post.

So, if you decide to license your Office client applications through Office 365 (instead of buying a standalone Office package), you should know that we are basically talking about two very similar, but still different, licensing options – Office 365 ProPlus and Office 365 Business. Both can be purchased separately, or as part of certain Office 365 plans.

Office 365 Business is included in Office 365 Business Premium. Similarly, Office 365 ProPlus is included in the Office 365 Enterprise E3 and E5 plans.

Although Office 365 ProPlus and Office 365 Business provide very similar functionality and features, they are not the same product. Users tend to think that the Office applications provided in the Office 365 ProPlus package are exactly the same as the Office applications in Office 365 Business; this is because both versions are installed in the same way and provide almost the same user experience.

Both Office 365 ProPlus and Office 365 Business provide Office client apps such as Word, Excel, PowerPoint, Outlook, and Publisher. Also, with both versions you can use these apps on up to five devices per user, and with both versions you get Office updates as long as you have a valid license.

However, there are a number of differences between Office 365 ProPlus and Office 365 Business.

The most important differences are:

• Office 365 Business can be deployed for up to 300 users per organization, while Office 365 ProPlus has no such limitation.

• Office 365 ProPlus provides the Microsoft Skype for Business client application, while Office 365 Business does not.

• Office 365 ProPlus allows you to run the Office client apps in virtual desktop scenarios, while Office 365 Business does not.

• Office 365 ProPlus supports archiving and compliance features such as Exchange Online and SharePoint Online archiving and compliance, while Office 365 Business does not. Besides this, you can integrate the Office client apps from Office 365 ProPlus with Azure Information Protection. The client applications from Office 365 Business do not fully support integration with Azure Information Protection.

• Office 365 ProPlus supports Group Policy–based deployment configuration. This is not possible with Office 365 Business.

• Office 365 ProPlus provides the InfoPath Designer application, while Office 365 Business does not. Also, Power Query, Power Pivot, and Power View are not supported in Office 365 Business.

As you can see, most of the differences are not visible to the end user, but admins should be aware of them – especially in medium-sized companies, which usually go with Office 365 Business Premium. And yes, this is usually a good choice for small and mid-sized companies, unless you need some functionality that is available only in Office 365 ProPlus and not in Office 365 Business. For example, if you are running a 100-user company and plan to go with Office 365, you’ll probably consider Office 365 Business Premium. However, if you go this way and later decide that you want to deploy Azure Information Protection to your clients and have it integrated into the Office apps, you will not be able to do that. If you run into this scenario (or already have), you will need to buy your users an Office 365 ProPlus or E3 license.

Out of office – does it make sense anymore?

Wednesday, May 3rd, 2017

 

One of the most common things that almost everyone does when leaving the office or workplace for some time is to set an out of office (aka OOF – if you want to know why it is not OOO, see here) message on his/her mailbox account.

Some people do it when they plan to be absent for a few days or more; some do it even when they are away for just half a day. And while some try to write genuine messages with more or less sense of humor (yes, I mean those 404-like messages), most of these messages are typical in that 90% of them contain the phrase “limited email or Internet access”. This is mostly used as an excuse for not replying as fast as usual. These days, while I’m on a mini vacation (and out of office), I’m wondering whether this really makes sense anymore. Do we really have limited Internet access while we are out of office? Or do we have even more Internet access than we usually consume during regular activities in the office? It is enough to look at the people around you, and the answer is pretty clear – being out of office, on a vacation or a business trip, actually does not limit our Internet or email access. On the contrary, we mostly do everything we can to have Internet access most of the time. Today, it is almost impossible to find a café, restaurant or hotel without Internet access for guests. Some of these facilities care more about their WiFi quality than about their toilets. If you are attending a conference or some business event, free Internet access is mandatory. Yes, sometimes it doesn’t work very fast, and you might not be able to watch HD YouTube videos, but we are talking about email here. You’re not taking your laptop with you? Even if that’s true (which I doubt), how many emails do you actually read on your laptop today, not to mention old-fashioned desktops? Speaking for myself, I read at least half of my emails on my phone. I also respond to many of them from my phone – when I realized that, I decided that I needed a phone with a bigger screen.

So, it’s not about having limited Internet access (and you know it :)). Is it about not having the time to answer emails while out of office? Not for me. If I’m on a business trip, I usually have more time for emails than when I’m at my office. My usual working day is full of meetings, and very often I don’t even open my laptop before the afternoon, not to mention the evenings, when I spend time with my family. Actually, when I’m out of office on a business trip, I have more time to read and respond to emails (and I do get a lot of them). I know quite a few people who respond to email faster while they are out of office than when they are in the office (whatever that means). Because of this, I’m seriously thinking of starting to use an in-the-office automatic reply for myself. I just need some time to come up with a genuine message that will not make people think I’m crazy.

On vacation, I read emails at least once a day. It is my choice – I’m not saying it’s the right thing to do. It’s just that I’ve realized this is less stressful than coming back to 1000+ unread emails. I fully respect those who like to disconnect from email; it just doesn’t work for me.

If it is not about time, and definitely not about having limited Internet access, is it about our willingness to respond to email while out of office? I think it is. Having an OOF message on your mailbox is actually a nice way to say: “Yes, I’ve probably read your email about 15 minutes after you sent it, but I will not respond for some time because I’m out of office”. If it is an email that requires some action from you, this kind of excuse is worth even more :). And because you’re out of office, most people will refrain from calling you on the phone (personally, I dislike phone calls most of the time).
Let me know what you think in the comments.

***

These days I’m using Office 365 Delve more and more. What looks really interesting there is the My Analytics part. If you haven’t tried it so far, and you’re using Office 365, I strongly recommend that you do. Among other things, Delve will actually tell you how you manage your time and, especially, how you manage your emails. You might be surprised when you look at it. And maybe you’ll realize that you need in-the-office automatic replies more than you think.

Mailbox migration between Exchange organizations – content index issue

Friday, April 22nd, 2016

 

When you move mailboxes from one Exchange organization to another, you need to perform several preparation steps, as I wrote before on my blog. However, you can still face some strange issues. One issue I ran into a few days ago was quite strange: when I created a migration batch, the user object was left in the Syncing state for a long time and eventually ended up in the Failed state. Exploring the migration log revealed this message: “Relinquishing job because of large delays due to unfavorable server health or budget limitations”. Not descriptive at all. After some time, I realized that the ContentIndexState of the database where the source mailbox resides was in a Failed state. From that point, things were quite easy to resolve. Run EMS and execute the following cmdlets:

Stop-Service MSExchangeFastSearch
Stop-Service HostControllerService

to stop these two services. After the services are stopped, go to the folder where your mailbox database is stored and delete the Exchange content index catalog for the database (it is in the same folder as the database, in a subfolder named after the database GUID). After you do that, start the MSExchangeFastSearch and HostControllerService services again by using the Start-Service cmdlet. When they are up and running, create and start the migration batch again. This time it should complete without issues.
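The whole repair can be sketched as a short EMS script. This is a hedged sketch, not a verified tool: the database name “MDB01” and the path logic are assumptions you would adjust to your environment, and you should double-check the catalog folder before deleting anything.

```powershell
# Confirm that the content index is actually the problem
Get-MailboxDatabaseCopyStatus -Identity "MDB01" |
    Format-List Name,Status,ContentIndexState

# Stop the search services that hold the catalog open
Stop-Service MSExchangeFastSearch
Stop-Service HostControllerService

# Delete the content index catalog folder (named after the database GUID)
$db = Get-MailboxDatabase "MDB01"
$catalog = Join-Path (Split-Path $db.EdbFilePath.PathName) ($db.Guid.ToString() + "*")
Remove-Item $catalog -Recurse -Force

# Restart the services; the index will be rebuilt from scratch
Start-Service MSExchangeFastSearch
Start-Service HostControllerService
```

After the rebuild finishes, ContentIndexState should report Healthy and the migration batch can be recreated.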

Load Balancing in Exchange Server 2016

Saturday, January 9th, 2016

Similar to when TMG went EOL and people were asking how they should publish Exchange services, we now have a similar question about load balancing Exchange Server, since support for Windows Network Load Balancing is gone. Architectural changes in Exchange Server 2016, which resulted in having only one server role (even the Client Access Server is now integrated into the Mailbox Server), make WNLB impossible to use. The reason is simple – you can’t use WNLB and Failover Clustering (required by a DAG) on the same pair of machines. However, to be honest, even if this were not the case, WNLB is not a solution I’d recommend to anyone for Exchange load balancing. It’s too old, with too many issues, for a critical service like Exchange.

So, obviously, we need an external solution for load balancing client access traffic. In most cases, we are actually talking about HTTPS-based traffic, as most Exchange traffic (except SMTP) uses this protocol.

With Exchange 2013, Microsoft announced that it is no longer required to use expensive Layer 7 load balancers for Exchange – you can also use a cheaper and simpler Layer 4 balancer for client requests. What’s even better, there’s no need for session affinity anymore. This is because the client access services now proxy the connection to any Mailbox server, and it doesn’t really matter where the client actually establishes the session. This simplifies the requirements for Exchange load balancers even more.

Let’s see what the real difference actually is, and why you should care about this anyway. You might be surprised that the main issue here is actually the node health check. Luckily, Microsoft provides a very intelligent way to handle this even with cheap (or free) load balancers. As I’m currently writing a course on Exchange 2016, I’ll use some of my draft materials to clarify this.

Load balancers that work at Layer 4 are not aware of the actual content of the traffic being load balanced. The load balancer forwards the connection based on the IP address and port on which it received the client’s request, and it has no knowledge of the target URL or request content. For example, a Layer 4 load balancer does not recognize whether a client is connecting with Outlook on the Web or with Exchange ActiveSync, as both connections use the same port (443). Also, Layer 4 load balancers are not able to detect the actual functionality of a server node in the load balancing pool. For example, a Layer 4 load balancer can detect that one of the servers in the pool is completely down, because it does not respond to PING, but it can’t detect whether the IIS service on that server is working or not. From the client access perspective, if IIS is not working on the server, it is almost the same as if the server were down. However, the server will not be marked down by a Layer 4 load balancer in this case. Some Layer 4 load balancers can provide a simple health check by testing the availability of a specific virtual directory, such as /owa, but the functionality of one virtual directory does not guarantee that the others are also working fine.

Load balancers that work at Layer 7 of the OSI model are much more intelligent. A Layer 7 load balancer is aware of the type of traffic passing through it. This type of load balancer can inspect the content of the traffic between the clients and the Exchange server, and it uses the results of this inspection to make its forwarding decisions. For example, it can route traffic based on the virtual directory to which a client is trying to connect, such as /owa, /ecp or /mapi, and it can use different routing logic depending on the URL the client is connecting to. When using a Layer 7 load balancer, you can also leverage the capabilities of the Exchange Server 2016 Managed Availability feature. This built-in feature of Exchange monitors the critical components and services of the Exchange server and can take action based on the results.

Managed Availability uses Probes, Monitors, and Responders as components that work together. These components test, detect, and try to resolve possible problems. The Probe component is used first: it tries to gather information or execute a diagnostic test for a specific Exchange component. After that, a Monitor component evaluates the results that the Probe provides. The Monitor uses this information to decide whether the component is healthy or unhealthy. If a component is unhealthy, a Responder component can take measures to bring the failed component back to a healthy state. This can include a service restart, a database failover or, in some cases, a server reboot.

If a critical server component is healthy, Managed Availability generates a web page named healthcheck.htm. You can find this web page under each virtual directory, for example /owa/healthcheck.htm or /ecp/healthcheck.htm. If Managed Availability detects that a server component is unhealthy, the health check web page is not available and a 403 error is returned. You can use this by pointing your load balancer to the health check web page for each critical service.

A Layer 7 load balancer can use this to detect the functionality of critical services and, based on that information, decide whether it will forward client connections to that node. If the load balancer’s health check receives a 200 status response from the health check web page, then the service or protocol is up and running. If the load balancer receives a 403 status code, it means that Managed Availability has marked that protocol instance down on that Mailbox server.
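You can probe these same pages manually before configuring the load balancer, to see the 200/403 behavior for yourself. A small sketch, assuming a hypothetical server name ex01.adatum.com and the standard virtual directories:

```powershell
# Manually probe the Managed Availability health check pages – the same
# URLs a Layer 7 load balancer would be configured to monitor.
$urls = "https://ex01.adatum.com/owa/healthcheck.htm",
        "https://ex01.adatum.com/ecp/healthcheck.htm",
        "https://ex01.adatum.com/mapi/healthcheck.htm"

foreach ($url in $urls) {
    try {
        $response = Invoke-WebRequest -Uri $url -UseBasicParsing
        Write-Host "$url -> $($response.StatusCode)"   # 200 = protocol healthy
    } catch {
        # A 403 (protocol marked down) or an unreachable server lands here
        Write-Host "$url -> unhealthy or unreachable"
    }
}
```

Each protocol has its own health check page, so a single server can be healthy for OWA while being marked down for MAPI.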

Although it might look like the load balancer performs only a simple health check against the server nodes in the pool, the health check web page provides information about the workload’s health by taking into account multiple internal health probes performed by Managed Availability.

It is highly recommended that you configure your load balancer to perform the node health check based on the information provided by the Managed Availability feature. If you don’t, the load balancer could direct client access requests to a server that Managed Availability has marked unhealthy. In the end, this results in inconsistent behavior and a negative user experience.

Changing certificate on AD FS and DRS

Monday, August 3rd, 2015

 

If you have AD FS with the Device Registration Service (DRS) configured on Windows Server 2012 R2, you might have experienced trouble when deciding to change the certificate on the AD FS server. Although the AD FS management console will allow you to change the service certificate for AD FS, it will not let you change the SSL certificate, nor will it allow you to assign the group managed service account used by DRS the rights to access the private key of the new certificate. As a result, changing the AD FS service certificate only through the AD FS console will make your DRS stop working (and your devices unable to perform Workplace Join). So, if you want to change this certificate, for whatever reason, there is a procedure to follow:

1. First, during the certificate enrollment process for the new certificate, make sure that you assign rights to access the private key. This is actually not a very obvious thing to do. When you start the certificate request procedure on your AD FS server, choose the Web Server template, and then open its properties to configure more settings. On the Subject tab, make sure that you enter all the names you need. First, you need the name of your AD FS cluster (or server), for example, adfs.adatum.com. Make sure that this name is not the same as your AD FS server name. You also need this same name as a SAN (Subject Alternative Name), as well as the enterpriseregistration and enterpriseenrollment SAN host names (the second one is for Windows 10). See the example below:

[Screenshot: certificate request properties – Subject tab with the required names]

2. Then, go to the Private Key tab, expand Key Permissions, and select the Use custom permissions check box. Click Set permissions, then Add, select Service accounts as the object type, and type the group managed service account that you created when you first configured DRS. See the example below:

[Screenshot: Private Key tab – custom permissions for the group managed service account]

My group managed service account in this example is FsGmsa1 in the Adatum.com domain. When you have configured this, finish the certificate enrollment.

Note: Make sure that this service account has an SPN set to your AD FS cluster name. You can check that with the following command: setspn -l adatum\FsGmsa1$. The result should look something like this:

host/adfs.adatum.com
http/adfs.adatum.com

3. When you finish the certificate enrollment, while you still have the Certificates console open, double-click the new certificate. Go to the Details tab and scroll down to the Thumbprint attribute. Copy the thumbprint value to Notepad, and remove the spaces between the pairs of characters.

4. Now you have to issue two PowerShell commands to set up the new certificate to work with AD FS. The first command sets your new certificate as the AD FS service certificate (this part you can also do using the AD FS Management console):

Set-AdfsCertificate -CertificateType Service-Communications -Thumbprint "your_new_certificate_thumbprint"

The second command changes the SSL certificate for AD FS (that’s the one you need for DRS, and the one you can’t change with the console):

Set-AdfsSslCertificate -Thumbprint "your_new_certificate_thumbprint"

When you finish this, you will be good to go. Restart your AD FS and DRS services, and they should start successfully.
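For reference, steps 3 and 4 can be combined into a few lines of PowerShell. This is a sketch under assumptions: the subject name CN=adfs.adatum.com matches the example above, and the service names adfssrv (AD FS) and drs (DRS) are the Windows Server 2012 R2 defaults.

```powershell
# Grab the new certificate's thumbprint without opening the Certificates
# console – adjust the subject filter to your environment.
$thumbprint = (Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -eq "CN=adfs.adatum.com" }).Thumbprint

# Set the new certificate as the AD FS service communications certificate
Set-AdfsCertificate -CertificateType Service-Communications -Thumbprint $thumbprint

# Set the same certificate as the SSL certificate (not possible from the console)
Set-AdfsSslCertificate -Thumbprint $thumbprint

# Restart AD FS and DRS so the change takes effect
Restart-Service adfssrv, drs
```

If several certificates share the same subject, filter on NotAfter as well so you pick up the newest one.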

Refreshing templates and testing Azure RMS

Saturday, January 10th, 2015

If you are using the Microsoft Azure RMS service together with Office 365 to extend the functionality of the basic RMS in Office 365, you probably do it to create customized RMS templates. If you activate Office 365 Rights Management, it will let you use only two default templates for content protection and one template for email protection (Do Not Forward). Some users might be fine with these capabilities, but if you want features more like on-premises RMS, you’ll probably want Azure RMS. When you enable Azure RMS with your Office 365 subscription, it lets you create custom RMS templates. However, if you are like me and like to play around and make changes frequently, you might want to speed up the sync of custom Azure RMS templates to Office 365. Also, it might be useful to check and test your Azure RMS configuration from time to time. Luckily, there are a few nice PowerShell cmdlets that can be used for this purpose.

First, you need to connect to your Office 365 tenant. To store your credentials in a PS variable, issue this cmdlet and press Enter:

$cred = Get-Credential

After this, you will be prompted to enter your Office 365 admin credentials. When you have done that, type the following and press Enter:

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $cred -Authentication Basic -AllowRedirection

This will create a session to your Office 365 tenant and store it in the $Session variable.

Next, you want to import your session, which you do by executing the Import-PSSession $Session cmdlet.

To check your RMS configuration, just execute the Get-IRMConfiguration cmdlet after you have imported the session in the previous step. Check that internal and external licensing are enabled, and also check the RMSOnlineKeySharingLocation (for Europe, it should be https://sp-rms.eu.aadrm.com/TenantManagement/ServicePartner.svc). Besides this, it is useful to test your RMS configuration by executing the Test-IRMConfiguration -RMSOnline cmdlet.

If all tests pass, and you want to force a sync of your new/changed/deleted templates from Azure RMS to Exchange Online, here is the cmdlet:

Import-RMSTrustedPublishingDomain -Name "RMS Online - 1" -RefreshTemplates -RMSOnline

Make sure that the name of the RMSTrustedPublishingDomain is accurate (if you used the default values, it will be as in the example above).
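Putting it all together, the whole check-and-refresh sequence looks like this. A sketch only – the connection URI, key sharing location, and default TPD name are the ones from the examples above; yours may differ.

```powershell
# Connect to Exchange Online
$cred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri https://outlook.office365.com/powershell-liveid/ `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $Session

# Show only the IRM settings discussed above
Get-IRMConfiguration |
    Format-List InternalLicensingEnabled,ExternalLicensingEnabled,RMSOnlineKeySharingLocation

# End-to-end test against Azure RMS
Test-IRMConfiguration -RMSOnline

# Force template sync from Azure RMS to Exchange Online
Import-RMSTrustedPublishingDomain -Name "RMS Online - 1" -RefreshTemplates -RMSOnline

# Clean up the remote session
Remove-PSSession $Session
```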

Exchange publishing after TMG/UAG

Monday, April 28th, 2014

After Microsoft announced that they will not develop Forefront Threat Management Gateway (TMG) anymore, and that this product, together with UAG, is end-of-life (you can see more about this here), a lot of people I work with were pretty confused. To most of them, the words “end-of-life” sounded like “not supported anymore”, which is not the case. However, from a technical perspective, most clients I work with were worried about how to publish Exchange server without TMG.

In this post, I will try to cover the most common Exchange+TMG scenarios and ways to handle them in the near and far future. You’ll see that things are not as black as they might look at first sight.

First, let’s remind ourselves what TMG actually does for Exchange server. TMG 2010, as a firewall/proxy/caching solution, is, among other things, capable of providing application-level publishing for Exchange Server, instead of doing that just at the transport layer. This means that instead of just forwarding (and protecting) HTTPS and SMTP traffic to your Exchange, TMG is actually “aware” that it has Exchange behind its back. Because of this, it can impersonate the Exchange server in some scenarios (for example, for OWA), terminate client connections on itself, authenticate users, and then connect to Exchange to fetch the content for the user. TMG uses so-called listener components to securely publish Exchange (and other) services to the Internet. In the Exchange publishing scenario, the listener component is, for example, able to generate a form-based authentication page for OWA, securely authenticate a user, and then establish the connection to the Exchange CAS. Besides this, TMG was also able to securely publish your SMTP or SMTPS to the Internet, as well as protect that kind of traffic from malware. This kind of publishing was very convenient for a lot of companies. I mostly work with medium to large companies, and quite a lot of them were using TMG to publish Exchange servers to the Internet, because the solution is affordable, reliable, easy to deploy and manage, and proved to be very secure. Of course, there are some downsides to using TMG for other purposes, but quite often I saw a dedicated TMG used just for Exchange publishing. In much rarer cases, UAG was also used to protect Exchange, but since it is also EOL, the situation is actually the same.

The question we have now is what to do with Exchange publishing in the post-TMG/UAG era. So, let’s look at some of the most common scenarios you might run into.

Scenario 1: You already have TMG 2010 and Exchange Server 2003/2007/2010 deployed. What are your options? I want to be very clear on this point: although TMG is an end-of-life product, this does not mean that you have to look for another solution immediately. TMG 2010 will still be supported until the year 2020, per the support lifecycle policy for this product. This gives you quite a lot of time to find another solution while still running a fully supported configuration. So, no reason to panic – there’s plenty of time left to think about other options. Make sure that you update your TMG server on a regular basis, and you’ll be fine.

Scenario 2: You already have TMG 2010 and Exchange Server 2003/2007/2010 deployed, but you are planning an upgrade to Exchange Server 2013, and you know that TMG does not natively support Exchange Server 2013 publishing. What should you do? Yes, it is true that TMG does not support Exchange Server 2013 publishing out of the box, but there’s a pretty simple way to make it work. First, you don’t need to worry about SMTP publishing – it works as before. You just have to direct your SMTP publishing rule to the Exchange Edge server (available only in Exchange Server 2013 SP1) or to the Client Access Server (if you don’t have Edge; in version 2013, incoming SMTP is handled by the CAS). For publishing Exchange web-based services, such as OWA, Outlook Anywhere or ActiveSync, you’ll have to run the publishing wizard, select the option to publish Exchange 2010 (as 2013 will not be available), and after you create the rule, make some small fixes to it. There’s a great post from Greg Taylor, Principal Program Manager in the Exchange PG, that covers this step by step, and you can find it here. Basically, in most cases, you’ll just have to adjust the URL for OWA logoff, which changed in Exchange 2013. However, if you decide to go with the latest and greatest and use MAPI over HTTP (supported only when using Exchange Server 2013 SP1 with Outlook 2013 SP1), you will also have to add the /mapi/* virtual directory path to the Outlook Anywhere publishing rule, as this folder was not previously published by the Exchange publishing wizard. This is not mandatory. Exchange Server 2013 SP1 can also work with MAPI-over-RPC-over-HTTPS client connections, like Exchange 2010, but if you have Outlook 2013 SP1 clients you can also enable MAPI over HTTP, as it brings several performance and stability enhancements. You can read more about this in a great article by Tony Redmond here.

Scenario 3: You already have a 3rd-party publishing solution in production for Exchange Server 2003/2007/2010, and you’re planning to upgrade your Exchange organization to Exchange Server 2013. In this case, the most important thing is to test your current solution with Exchange Server 2013. It might require some modifications to the current publishing configuration, but in most cases, you will be good to go with what you have. If your current solution can’t work with Exchange Server 2013, or you want to replace it for any reason, look at the next scenario.

Scenario 4: You are planning to deploy Exchange Server (in any version), but you don’t have a publishing solution, or your current solution does not work with the version of Exchange you’re about to deploy and publish. In this case, things can be a bit more complicated. First, you should be very clear on what you’re publishing. For the Exchange server, in most cases, you will want to publish SMTP (so your Exchange can receive emails from the Internet) and HTTPS-based services such as Outlook Web App, Outlook Anywhere, Exchange ActiveSync and, in some less frequent scenarios, Exchange Web Services. One of the very nice features of TMG is that it is capable of securely publishing all these services while providing attack protection at the same time. But you can’t buy TMG anymore. As said before, it is still supported, but no longer on the market. That leads us to a situation where we must find a solution to securely publish SMTP and to publish the HTTPS-based services. If you are looking for a hardware (or virtual) appliance to solve this, most likely you’ll have to look for two solutions instead of one, as with TMG. You will need one solution for publishing the web-based Exchange services, and another for securing and publishing SMTP. In this case, budget becomes a very important factor. So, let’s look a bit deeper into these scenarios.

Scenario 4.1: You have a very limited budget, but you need a solution that works. I hear this all the time :). Well, there actually are solutions for this scenario, and they can be almost free. Instead of using TMG (which was, by the way, pretty affordable), you can achieve reverse proxy functionality by using the Application Request Routing component for IIS. This is an extension for IIS on Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012, and it’s free. To quote Microsoft: IIS Application Request Routing (ARR) 3 enables Web server administrators, hosting providers, and Content Delivery Networks (CDNs) to increase Web application scalability and reliability through rule-based routing, client and host name affinity, load balancing of HTTP server requests, and distributed disk caching (end of quote). Well, we can also use it to load balance and publish Exchange web services. It is not complicated to deploy, and if you choose to go this way, you can find a very good configuration guide here. You might have noticed that ARR is not supported on Windows Server 2012 R2. This is because 2012 R2 provides another technology for a similar purpose – Web Application Proxy. Unlike ARR, which is an extension to IIS, WAP is a subcomponent of the Remote Access role in Windows Server 2012 R2. It actually replaces the AD FS Proxy from previous Windows versions, but it also provides some publishing functionality. WAP is free, as it is included in Windows, but it requires some more administrative time to deploy and configure. It requires that you deploy AD FS on a separate machine to make WAP work. With WAP, you can also use AD FS–based authentication. If you have Exchange only on-premises, this might not be a very attractive option, but for hybrid deployments with Office 365, WAP is a great solution. If you decide to give it a try, you can find good step-by-step setup instructions here.

However, ARR and WAP can’t publish SMTP for your Exchange server, so you will need another solution for that.

For SMTP, you’ll need to address multiple key points. From the Exchange perspective, it is very important whether you are using the Edge role or not. If you are, then you can deploy an Edge server (or servers) in the DMZ and publish SMTP on the firewall. You can activate the anti-spam agents on your Edge server and stop most of the dirty SMTP traffic before it enters your internal network. However, you still have to deal with antivirus protection. As part of Microsoft’s decision to shut down the Forefront family of products, Forefront for Exchange Server is also no longer available. You can go with a third-party antivirus solution, or you can choose Exchange Online Protection from Microsoft. If you don’t already have an on-premises AV solution, I recommend that you seriously consider Exchange Online Protection. Yes, I know that emails go through Microsoft’s servers before they reach you – I’ve heard this argument hundreds of times, with all variations of NSA-related conspiracy added to the story. I’m not going to elaborate on that, but I’ll just say that if your email goes through Microsoft’s servers before it reaches your Exchange, that will certainly be one well-known point in its path. Do you know anything about the others?

Scenario 4.2: You have some budget to spend on an Exchange publishing solution. Good for you :). In this case, I recommend that you look at solutions from F5 and Kemp. I’m not actually promoting any of their products, but in the Exchange community these are proven to be reliable and secure solutions for Exchange publishing. With some of the products they offer, you can publish and load balance both HTTPS and SMTP. Also, if you are looking for a good (but not cheap) SMTP AV/AS/gateway solution, you should definitely check out Cisco IronPort Email Security appliances. I’ve seen these working in production, and I’m pretty amazed by the effectiveness they provide.

Feel free to comment on this article. I tried to summarize some solutions, based on my field experience so far. I’m not saying that this is all there is, but I hope it will help some people go in the right direction.

My sessions at upcoming conferences

Tuesday, March 25th, 2014

After delivering four very successful CloudOS events in Sarajevo and Belgrade, with great audiences, I’m heavily engaged in preparing sessions for the spring conference season in the EE region. This year, four conferences are happening in a very short time frame (10 days); all of them are really great and I highly recommend that you visit at least some, if not all, of them.

It all starts with the first Serbian community-driven conference, called Tarabica. It will happen on April 5th in Belgrade, it is fully organized by Serbian community guys and MVPs, and I’m sure that it will be a great event. It happens on a Saturday, so you don’t have to ask your boss for permission to attend :). At this conference, I will deliver a session about managing smart cards with Forefront Identity Manager 2010 R2. You can see more about this session (and also register for the conference) here.

Right after the Tarabica conference, one of the largest official Microsoft conferences in the region starts – Windays14. This year it is again in Umag, a beautiful place on the Croatian coast. Last year it was very nice there – great place, excellent speakers, a lot of fun – and I’m sure it will be the same or better this year. At this conference, I’ll deliver a session about BYOD technologies in Windows Server 2012 R2 (session details are here) and also a session about how to efficiently build and manage an internal PKI (session details are here).

In the same week as Windays (but starting on Wednesday), the Slovenian Microsoft NTK conference will take place. This is the conference with the longest history in the region (I think this one is the 19th), and it always attracts a lot of people. It happens at Bled, a beautiful place on the famous Slovenian lake. At NTK, I will deliver a session about using Dynamic Access Control in production environments (session details are here), and a session about how to efficiently build and manage an internal PKI (session details are here).

And last, but definitely not least, in the second week of April, the fourth Bosnian Microsoft Network conference will take place. In the same place as last year (the Banja Vrucica hotel resort), MSN 4.0 will offer even better content and really great speakers from the whole region. Besides speaking there, I was also a member of the content team, and I’m really satisfied with what we have done content-wise. I’m very interested to see the attendees’ feedback this year. At MSN 4.0 I will deliver a session about Windows Server 2012 R2 as a CloudOS (session details are here) and a session about upgrading to Exchange Server 2013 (session details are here). Besides these official sessions, I will also co-deliver a case-study session about the Exchange & AD RMS implementation project at BH Telecom, which I led, together with a team member from the customer’s side.

As you can see, a lot of presentation work is in front of me in the next few weeks. Although preparing all these sessions is not an easy job, I’m really looking forward to seeing my friends, colleagues and fellow MVPs from Serbia, Croatia, Slovenia and Bosnia.

See you there!!

CloudOS MVP Roadshow events – here we go again…

Saturday, February 1st, 2014


After I delivered many Windows Server Roadshows in Bosnia and Serbia last year, and had a great time with participants, I’m very happy and proud to announce that we are set to go again – this time to cover Windows Server 2012 R2 as a Cloud OS.

Every day, more data is generated on many types of devices. Every day, IT administrators and engineers are dealing with various requests and ways to manage data and the devices that host it. More and more apps and services used and installed on-premises are extending to the cloud. To help IT people work more efficiently in the era of cloud services and the BYOD concept, Microsoft provides the CloudOS – Windows Server 2012 R2.

During two whole-day events (totally free for community members), one in Sarajevo (Feb 18th – Logosoft CPLS) and one in Belgrade (Feb 10th – Microsoft Serbia), we will explore the concept of the CloudOS, see how to enable people-centric IT, and dive into the technical benefits that Windows Server 2012 R2 provides. With lots of examples and demos, you will have a chance to see how to protect data in a BYOD environment by leveraging Windows Server 2012 R2 technologies for data access, device management and data protection. Also, we will discuss a new approach to storage management with the storage technologies built into Windows Server, as well as how to efficiently implement VDI in Microsoft-based environments. And last, but definitely not least, you will be able to see the new Hyper-V platform with its new features!

So, if you are interested in these topics, I’m sure we will have a great time exploring them again. Registrations for Serbia are already out; stay tuned for Bosnia – we will send invitations soon.

Lepide File Server Auditor – file servers under surveillance

Thursday, April 11th, 2013

After writing last month about Lepide Event Log Manager, this time I’m looking at another interesting piece of software from the same company, intended for surveillance of file servers.
The ability to monitor changes that occur in the resources that file servers host is very useful, especially when it comes to critical documents and content. The basic auditing that Windows Server provides through group policy and object access auditing can give you basic information, but locating and correctly interpreting that information can often be time-consuming and sometimes problematic.
Therefore, dedicated software focused on this type of surveillance and monitoring is very useful for many organizations.
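To illustrate why the built-in approach gets tedious, here is a minimal sketch (in Python, against a trimmed, hypothetical sample of a Windows Security event 4663, “An attempt was made to access an object”) of the kind of parsing you end up doing yourself when relying only on object access auditing:

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed 4663 event XML as the Windows event log renders it.
# Real events carry many more <Data> fields (logon ID, handle ID, process...).
SAMPLE_EVENT = """\
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <EventID>4663</EventID>
    <TimeCreated SystemTime="2013-04-10T09:15:02Z"/>
  </System>
  <EventData>
    <Data Name="SubjectUserName">jsmith</Data>
    <Data Name="ObjectName">C:\\Shares\\Finance\\budget.xlsx</Data>
    <Data Name="AccessMask">0x2</Data>
  </EventData>
</Event>
"""

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def summarize(event_xml: str) -> dict:
    """Flatten the <Data Name="..."> pairs of one event into a simple dict."""
    root = ET.fromstring(event_xml)
    fields = {
        d.get("Name"): d.text
        for d in root.findall("./e:EventData/e:Data", NS)
    }
    fields["EventID"] = root.findtext("./e:System/e:EventID", namespaces=NS)
    return fields

if __name__ == "__main__":
    info = summarize(SAMPLE_EVENT)
    print(f'{info["SubjectUserName"]} touched {info["ObjectName"]} '
          f'(access mask {info["AccessMask"]})')
```

And this is just one event type on one server – which is exactly the gap a dedicated auditing product aims to fill.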

Like Event Log Manager, File Server Auditor has a simple and intuitive interface and relatively lightweight configuration. After installing and configuring the software, which is very simple and has pretty light hardware requirements, you add the file servers to be monitored and install the agent on them, using the appropriate credentials. After that, real-time monitoring of changes occurring on the server begins, according to the settings you made in the File Server Auditor console.

Settings console

The central element upon which monitoring in File Server Auditor is conducted is the audit rule. Audit rules are formed from multiple components, so it is advisable to configure the rule sub-components before forming any auditing rules – except when you want to leave everything at the default values (which means monitoring everything all the time, which is perhaps not always the best option). If you prefer a more detailed approach, you can configure the following elements:

Lists

· Events: Here you configure the types of events that you want to follow – for example, files that are opened, read, modified, deleted, or renamed, and changes in SACL and DACL lists. Similar events can be tracked for folders as well. The default event list includes all supported events, which generally results in a pile of logs, so it is wise to narrow this list a bit.

· Process: You can configure which processes that generate changes to file server resources are tracked. Again, all are selected by default, but if you are interested only in specific processes, you can narrow the selection.

· File Name & File Type: As you would expect, it is possible to filter by file type (determined by specifying extensions) or by file name (in which case you can also use wildcards). This lets you audit only the files and folders that match the criteria in your defined filters.

· Directory: If you want to follow the resources contained within particular folders on the file server, here you can determine which folders to audit. You can form a list of one or more folders whose contents you want to follow.

· Drive: You can also choose which drive letters on the server are audited. Since this can vary from server to server, and the other options provide ample opportunities for precise filtering, this can be left at the default value, which includes all drives. Alternatively, you can exclude the system drive (usually marked with the letter C) and thus focus logging only on files on other drives.

· Time: The last element (i.e. list, as it is called in the console) is an option to define the time range for auditing. Although monitoring is continuous by default, you can change this and define intervals so that auditing is done only at certain times.

From these elements you form the audit policy and, finally, the audit rule, which contains the list of servers being monitored, the identities of the users you want to audit (all users are monitored by default, but this can be configured further), and the policy formed earlier.

Audit rule

This modular approach to configuration is fairly effective, and once the structure is set up, any of these components can easily be changed. In essence, the configuration components (somewhat awkwardly named “lists” in the user interface) form one audit policy, which is then assigned, through an audit rule, to the specified server or servers and the corresponding user (or users).
Users are defined through the User Group option. Here you can create groups of users to associate with the appropriate auditing policies. Groups formed here are related only to the application itself and are not visible outside it. It is especially nice that you can take users directly from Active Directory, and in the same place associate audit policies with the new groups, which shortens and simplifies configuration.
The console settings also allow you to configure alerts, which can be sent via email or SMS when an event defined by a query occurs, and to back up (and restore, if necessary) the configuration. Given that fully configuring the software can take quite some time, I advise you to be sure to make a backup.

The second part of the management console is designed for reporting on what has been configured. This part is based on SQL Server reporting, which has to be set up during the software installation. Reports are pretty clear and easy to read, even though the console itself (similar to the one in Event Log Manager) seems a bit archaic. Interestingly, the application’s look can be switched between a variety of layouts (e.g. Windows XP, Office 2007, Visual Studio), which is not particularly useful, but it’s cute.

Reports console
The predefined reports allow the display of all changes; read operations (successful and unsuccessful); file and folder creation (also successful and unsuccessful); modifications that occur on any resource; and modifications of permissions on files and folders (SACL and DACL). Each report can be further refined with filters such as time, server, users, files, folders, processes, and specific events. In essence, the filters can use any configurable parameter that we discussed earlier. In addition, it is also possible to create custom reports.

Conclusion

LepideAuditor for File Server is a very useful piece of software. It doesn’t take many resources, nor does it have a complicated configuration. There are a few things that could be improved (like the terminology in the console, and the graphical interface) but, most importantly, it does the job. More information about this product can be found on the Lepide portal.