This is also known as “SQL Server Backup to URL”.
This blog covers the topics below as a reference:
1. Minimum requirements:
2. Process to perform the Backup:
Step 1: Create Windows Azure Storage Objects:
Create a Windows Azure storage account and then a blob container.
Step 2: Create a SQL Server Credential
Create a Credential to store security information used to access the Windows Azure storage account.
Step 3: Write a Full Database Backup to the Windows Azure Blob Storage Service
Issue a T-SQL statement, or use SQL Server Management Studio, to write a backup of the sample database to the Windows Azure Blob storage service.
Step 4: Perform a Restore From a Full Database Backup
Issue a T-SQL statement, or use SQL Server Management Studio, to restore from the database backup you created in the previous step.
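The steps above can be sketched in T-SQL. This is a minimal sketch, assuming SQL Server 2012 SP1 CU2 or later; the credential name, storage account, container URL, and database name are placeholder values to substitute with your own.

```sql
-- Step 2: create a credential that stores the storage account name and access key
-- (myBackupCredential, mystorageaccount, and the URLs below are placeholders)
CREATE CREDENTIAL myBackupCredential
WITH IDENTITY = 'mystorageaccount',        -- storage account name
     SECRET = '<storage account access key>';

-- Step 3: write a full database backup to the blob container
BACKUP DATABASE AdventureWorks2012
TO URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/AdventureWorks2012.bak'
WITH CREDENTIAL = 'myBackupCredential', STATS = 5;

-- Step 4: restore from the backup written in Step 3
RESTORE DATABASE AdventureWorks2012
FROM URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/AdventureWorks2012.bak'
WITH CREDENTIAL = 'myBackupCredential';
```

The same statements can be generated from the Back Up Database dialog in SQL Server Management Studio by choosing URL as the destination.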
3. Windows Azure Backup tool
If the SQL Server database is older than SQL Server 2012, use the Windows Azure Backup tool to create rules,
to be continued….
Nutanix is one of the fastest growing technology companies of the decade. Headquartered in San Jose, California, Nutanix is valued at more than ~US$5B, with 8,000+ customers across 120+ countries and the highest global market share in the Hyper-Converged Infrastructure space. Within a span of a year and a half of sales in India, we have garnered close to 400 customers, including leading organisations across all verticals.
Nutanix technology integrates compute, storage, and virtualization into a single x86-based server deployed in scale-out clusters. It reduces power and space consumption and eliminates complexity. Built-in virtualization and application mobility make infrastructure truly invisible, shifting focus back to applications. Our technology has helped customers reduce cost (capex and opex) by as much as 40-60% while driving data-centre efficiency by reducing power, space, and cooling requirements by as much as 40-50%.
Please find below Nutanix collaterals and videos:
IDC report with independent perspectives from Nutanix customers on how they achieved TCO savings (http://go.nutanix.com/nutanix-pricing-vs-traditional-infrastructure-tco-roi-report.html). Gartner recently acknowledged our technology and vision by placing us as a leader in its Magic Quadrant for HCI (https://www.nutanix.com/go/gartner-magic-quadrant-for-hyperconverged-systems.php).
Thanks for reading!!!
Software Update Management
First part of the series, covering software updates using WSUS and the SUP (software update point) site system role.
Description will be updated shortly….
Best practices for creating and using groups
Groups are mainly used for scoping (views, alert notifications, and reports). Therefore, groups are often created based on Windows Computer objects because this class hosts most of the relevant monitors for a Windows application.
If groups are created with extended authoring tools (or directly in XML), they can and should be based on the Windows Computer objects hosting specific applications.
Example> a Windows Computer group that contains only the Windows computers hosting a discovered custom application class.
For notifications, the corresponding Health Service Watcher objects should be added to the group. This is necessary so that the Health Service Watcher objects needed for Operations Manager self-monitoring alerts, such as Heartbeat Failure or Computer Not Reachable, are included too.
In addition, groups are useful for creating overrides. Group-based overrides can be much easier to manage than putting overrides on specific instances.
It’s recommended to save groups in the same dedicated, custom, unsealed override management pack used for the application, because you can’t reference objects or classes in a different unsealed management pack.
Sealing the group management pack is also possible, but this has drawbacks for convenience and editing, and sometimes it breaks compatibility. Keeping all parts of an application together lets you easily maintain them in one management pack without influencing other management packs.
Set useful naming-convention rules for the groups and the management packs.
Example> a naming convention like GRP_xxx for the group name makes finding groups in the console easy. Custom management packs can have the same name as the base management pack with “– Override” added, so that searching for “override” finds the override management packs.
For your own custom management packs, add a short version of your company name to the beginning of the management pack name, for instance, CONTOSO_xxx.
Understanding management packs
A management pack (MP) defines:
-> what to discover
-> what to monitor
-> what data should be collected
-> how to monitor it
and defines visual elements:
-> dashboards
-> views
Note # The Authoring Console works with management packs that have a v1.0 XML schema, while Operations Manager 2012 uses a v2.0 schema.
Terms used in different places but with the same meaning and purpose (console term vs. authoring-tools term):
Target = Class
Instance = Object
Property = Attribute
Note # An instance is a representation of a target that shares the same properties (the details) and a common means of being monitored.
Instances are discovered by targeting the parts that make up the application you want to monitor with Operations Manager.
Singleton vs. non-singleton classes
A class represents a single type of object.
All instances of a class share a common set of properties.
Singleton class
> automatically created (discovered)
> no discovery rule required.
There can be only one instance of a singleton class.
Example> a group. It has only one instance (the group object itself) and is created during configuration through the Create Group Wizard, or automatically when a management pack is installed.
There is always
> a single instance of a given group, and
> groups are managed by the management servers from the All Management Servers Resource Pool.
A non-singleton class:
> can be managed either by agents or by management servers from any resource pool.
> can have any number of instances.
Example> the Windows Computer class. There are as many instances of this class as there are Windows-based computers to be monitored. This class is managed by agents: each agent installed on a Windows-based computer creates an instance of the Windows Computer class and manages it locally.
Note # Attributes of a class dictate how it is used. The two class types, singleton and non-singleton, dictate how class instances are discovered and whether they are managed by agents or by management servers of a certain resource pool.
Workflow targets
A workflow
>such as a discovery/rule/monitor/override/task
has a certain target defined.
> Target dictates what instances a particular workflow will run on.
Example> if you create a monitor that you need to run only on computers with the Domain Controller role installed,
you select the Domain Controller role as the target for this monitor. By doing so, you ensure that this monitor will run only on domain controllers.
> Target defines which agents the management pack with this monitor is distributed to.
Note# some management packs can have embedded resources like dynamic-link libraries (DLLs) or other kinds of files that are automatically copied to the target as well.
Best practice
> Always choose as specific a class as possible to ensure that the management pack and its workflows are downloaded only on computers where they are really needed.
Example> to monitor something that exists only on a computer running SQL Server, select the SQL Database Engine class instead of a generic class like Windows Computer.
> when you create new monitors or rules, use an existing class instead of creating a new one. This keeps the type space smaller, which is better for performance.
An alternative scenario is extending monitoring for an entire application that has no management pack available for download; in that case, create new classes that specifically describe the application model and how you intend to monitor its various parts. Even though fewer classes are better for keeping the instance space smaller, the number of classes you create is trivial compared to the number of workflows that run on an agent. The classes you choose also influence how parts are displayed in views, dashboards, and reports.
Authoring classes
> When building a class model for any application, start with an initial, or base, class that needs to get discovered so that afterwards all the higher-level classes are discovered based on it.
This ensures that the management pack is downloaded only on the agents where that application exists and that all the workflows that belong to parts of this application run only on those agents.
Another option is to target this discovery rule to a more generic class that acts as a seed discovery class. This ensures that the discovery rule (workflow) that runs to discover the initial class is very lightweight, which is good for performance, and, ideally, runs on a wide interval (for instance, every 24 hours).
> When defining classes, do not use properties that can change frequently. A configuration update is a costly operation if it happens too often and can have a significant impact on performance.
Note # This scenario is called a configuration churn and should be avoided.
Example> to monitor important folders of an application and discover these folders as classes, but do not define folder size as a property because, often, folder size changes frequently, and every time the discovery rule for this folder class runs, there will be a new value for the folder size property. This will cause a re-discovery of that class (to update the properties), and this will cause a configuration update on the agent(s) where this class is hosted.
State change events
The biggest difference between rules and monitors is that monitors:
> define a state
> insert a state change event into the database.
Note # These entries are stored in the StateChangeEvent table in the Operational Database.
This table is used frequently in various queries that get data from the database; the larger this table is, the slower the console becomes. A monitor that generates many state change events is too sensitive and most likely is not reflecting the actual state of the part it monitors.
Ideally, such a monitor should be redesigned. If redesign is not possible, the monitor should be tuned. Tuning means changing the way the monitor works via the available overrides.
Note # Even with the state change event storm feature of the management servers, which prevents a new state change from being written to the database if it is part of a storm of changes to the same monitor, state change events still impact performance. Monitors that are very sensitive and generate a lot of state change events are known as flip-flopping or noisy monitors.
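To identify the noisiest monitors, a query along the following lines can be run against the Operational Database. This is a sketch: the `State` and `MonitorView` names joined to `StateChangeEvent` are the ones commonly found in the OperationsManager database, but verify them against your version before relying on the output.

```sql
-- Sketch: top 20 unit monitors by number of state change events
-- (table/view and column names assumed; verify per Operations Manager version)
SELECT TOP 20
       m.DisplayName      AS MonitorDisplayName,
       COUNT(sce.StateId) AS NumStateChanges
FROM dbo.StateChangeEvent sce WITH (NOLOCK)
JOIN dbo.State s       WITH (NOLOCK) ON sce.StateId = s.StateId
JOIN dbo.MonitorView m WITH (NOLOCK) ON s.MonitorId = m.Id
WHERE m.IsUnitMonitor = 1
GROUP BY m.DisplayName
ORDER BY NumStateChanges DESC;
```

Monitors at the top of this list are the first candidates for tuning or redesign.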
Monitor initialization >
When an agent goes into maintenance mode, each of its monitors generates a state change event, changing from its current state to the Not Monitored state.
In turn, when an agent exits maintenance mode, each monitor it uses sends a state change event from the Not Monitored state to the Healthy state.
This happens each time a monitor starts working.
Note # This functionality is crucial to the calculation of the availability of each part being monitored.
However, it generates a significant number of state changes, so it is best to avoid scenarios where a large number of agents are put into and pulled out of maintenance mode frequently.
If that cannot be avoided, a good approach is to reduce the database grooming settings for state change event data as much as possible (the default is 7 days). In fact, it is a good idea to reduce the database grooming settings for all data types as much as the business allows and instead rely mostly on the historical data that is available in the Data Warehouse Database through dashboards and reports.
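The current retention values can be inspected directly in the Operational Database. A sketch, assuming the `PartitionAndGroomingSettings` table that typically holds these settings; verify the table and column names for your version:

```sql
-- Sketch: list retention (DaysToKeep) per groomed data type
-- (table/column names assumed; verify per Operations Manager version)
SELECT ObjectName, DaysToKeep
FROM dbo.PartitionAndGroomingSettings WITH (NOLOCK)
ORDER BY DaysToKeep DESC;
```

The supported way to change these values remains the Database Grooming settings in the Operations console.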
Module cookdown
A workflow (monitor, rule, and so on) is made up of one or more modules that it needs in order to function. Cookdown is a feature that saves memory and CPU time by re-using already loaded modules instead of loading and initializing new instances of those modules.
Type space
The type space is the total number of
>management packs, classes, relationships, resources, enumerations, monitor types, views, and other internal definitions that exist in the environment (the Operational Database).
>A copy of the type space is held in memory by each Data Access Service on each management server.
>Each time a new class, workflow, view, and so on is created, modified, or deleted in the console, the Data Access Service of each management server reloads the type space.
Note #The bigger the type space is, the longer it takes to reload. In large environments, this might significantly impact performance on the management servers until the reload is finished.
Best Practice > it is better to have more management packs, separated by application or by other criteria (such as one management pack containing the definitions of classes, relationships, and discovery rules, and a separate management pack containing the monitors, rules, views, and so on), than to have one very big management pack.
> Import only the management packs that are needed.
> Each agent is able to handle many instances, but the impact on performance could be severe if the management group is not able to calculate the configuration.
In general, expect an average of 50 to 100 discovered instances hosted by an agent, which results in about 50,000 to 100,000 discovered objects, to be handled by a management group in a 1,000-agent environment.
> Consider the impact that type space size can have when you use Windows PowerShell scripts that connect to the Data Access Service to perform different actions:
> custom maintenance
> custom monitoring
> automatic overrides, and so on.
Usually, such scripts consume a large portion of the type space loaded into memory from the Data Access Service, and in some situations, these scripts can load up to almost the entire type space, depending on what the script does.
Example> a rule might connect to the Data Access Service to get the list of all monitors and then, based on some criteria, take some action either on the monitors, on the objects to which they are tied, or on the alerts they have generated. In such a scenario, you might end up loading the monitor types, classes, or other parts of the type space into the memory of the associated MonitoringHost.exe instance that is running the Windows PowerShell script. This potentially causes high CPU usage and definitely causes high memory usage for that process.
Authoring groups
Groups are
> singleton classes that are hosted by the All Management Servers Resource Pool.
That is, management of groups is split between the management servers of this resource pool. The members of a group are dynamically calculated by workflows called Group Calculation workflows.
Static groups (groups with explicit membership) are much better for performance than dynamic groups (groups containing dynamic membership calculation rules).
Note# dynamic groups are much more resource intensive when processed.
The more groups you have, and, specifically, the more dynamic groups, the bigger the performance impact is on the management servers of the All Management Servers Resource Pool.
Best Practice > avoid creating new dynamic groups and instead rely on classes for targeting, or on other scenarios where the desired functionality can be achieved using different methods.
When dynamic groups are needed, try to use the simplest dynamic membership rules possible.
Group calculation interval
Another way to optimize an Operations Manager environment is to use and tune the group calculation interval.
Custom groups are basically used for:
> scoping user role views and dashboards
> filtering notifications or overrides.
Discovery rules for the groups can impact the performance of the environment because the queries create multiple read operations to the Operations Manager database. Adding many dynamic groups with complex criteria to Operations Manager can negatively impact the overall performance.
Note # Group calculations occur every 30 seconds by default.
> Change the group calculation interval in the registry of the management server in the key GroupCalcPollingIntervalMilliseconds.
Sealed management packs
Sealing a management pack changes it from an .xml file to an .mp file, which is a binary representation of the management pack.
> When a management pack is sealed, the file is digitally signed by the provider, so the user knows that it hasn’t been modified since.
> To upgrade a sealed management pack, the same key must be used or the upgrade will fail.
Note # Either the sealed or the unsealed version of a management pack can be added to a management group, but never both at the same time.
> Sealed management packs enforce version control when an updated version of the management pack is imported into a management group.
That is, if the management pack is sealed, only a newer version of the same management pack can be imported, and only if the newer version successfully passes the backward compatibility check.
Note # For unsealed management packs, the new version is always imported regardless of its compatibility and regardless of its version.
> a management pack can reference another management pack only if the management pack that is referenced is sealed.
Basically, to provide typical parts that are used by other management packs, such as groups or modules, you must seal the management pack.
Summary of best practices
Here is a list of the most important things to consider when working with management packs:
> Chosen class properties should change values as seldom as possible, close to never.
> Don’t use Operations Manager for software inventory (System Center Configuration Manager is built to do that), and don’t collect too many properties.
> Monitors should change their state as seldom as possible. They should not be too sensitive, and the related issue that is described in the alert should be resolved in a more permanent manner.
> The type space should be kept as small as possible. Import or create only what is needed and delete what is not of use.
> Windows PowerShell scripts that connect to the Data Access Service should be kept to a minimum. At least try to develop them in a way that loads as few objects as possible by using selection criteria for the Operations Manager cmdlets.
> Don’t over-use maintenance mode. If there is no way around it, reduce database grooming settings for state change events data.
> Targets for workflows should be as specific as possible. Use seed classes with lightweight discovery rules for custom application monitoring.
> Tune existing workflows using overrides. Disable unneeded workflows, adjust thresholds, set higher run intervals.
> Prefer static groups instead of dynamic groups, or at least try to use lightweight criteria for your dynamic groups.
> Change the group calculation interval when there are many groups in the Operations Manager environment.
> Configure before customize. Determine if an existing workflow would be enough instead of creating a new one.
> Classes, groups, modules, and so on should be in a sealed management pack so that they are not unexpectedly modified and so that they can be referenced by content in other management packs.
Thanks for reading!!!
Application Management
@Background
> CI-based model # Application Management works based on the CI (configuration item) model,
where the CI types below belong to Application
(fetched using a SQL query against the ConfigMgr DB):
Select * from CI_Type
CIType_ID TypeName
10 AppModel
13 GlobalExpression (newly introduced)
21 Deployment Type (newly introduced)
24 Install Policy Type
> SMSProv.Log # whenever a change or new piece of information is submitted from the console, SMSProv is the first component to look into.
> SMSDBMon.Log # keep tabs on this log to watch changes in the database tables, provided SQL tracing is enabled.
SMSDBMon
> wakes the respective component in the system to initiate the process
> the application definition is replicated to child sites using ConfigMgr DB replication
>Database Tables involved #
CI_ConfigurationItems
CI_ConfigurationItemRelation_Flat
CI_Type
>WMI @ ConfigMgr Server
Namespace: root\sms\site_SiteCode
SMS_Application
SMS_ApplicationTechnology
SMS_ApplicationLatest
SMS_DeploymentType
Flowchart of Application Management:
The flowchart below depicts the overall steps from creation of an application through its deployment, showing where the various components are involved; each component’s log can be checked to see the processing in detail.
Processing @ Server,
When an application is created in the ConfigMgr console (Software Library -> Application Management -> Applications), add the CI Unique ID column, which is used in tracking the application deployment.
Similarly, add the Content ID column under Deployment Types.
In the above process, 4 CIs will be created for the application, which can be located in the ConfigMgr DB (Note # the CI_UniqueID is the same as in the screenshot above),
using the SQL query below:
select * from dbo.CI_ConfigurationItems
where CI_UniqueID like '%12af18a5-c1ab-4dbf-b816-bae6bf1bb9fc%'
and IsLatest = 1
order by ModelId
CI_UniqueID is appended with /<digit>, which indicates the version; it starts at version 1 and subsequently increases when a change is made to the application in the console. (When changes are made, they can be tracked and noticed via SMSDBMon.Log.)
SDMPackageDigest exposes the details about the respective CI created for the application.
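To watch the versions accumulate for the sample application above, the IsLatest filter can be dropped and the digest listed alongside each version. This is a sketch reusing the example GUID from the earlier query; the SDMPackageVersion column name is an assumption to verify against your ConfigMgr DB schema:

```sql
-- Sketch: every stored version of the application's CIs, with the digest XML
-- (SDMPackageVersion / SDMPackageDigest column names assumed)
select CI_ID, CI_UniqueID, SDMPackageVersion, IsLatest, SDMPackageDigest
from dbo.CI_ConfigurationItems
where CI_UniqueID like '%12af18a5-c1ab-4dbf-b816-bae6bf1bb9fc%'
order by SDMPackageVersion
```

Only one row per CI should have IsLatest = 1, which is why the earlier query filters on it.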
Continue reading….will update shortly