All posts by Amy Manley

moneytree

VMTurbo Part 2 – Chargeback

I finally encountered a company that does actual chargeback for a virtual machine based on CPU, RAM and OS. Luckily, VMTurbo was also running in the environment keeping me aware of all of my hosts’ constraints, but I digress. I was challenged with finding out what a VM costs so that a price could be given back to the consumer, whether that was the DBA team or the eCommerce team. I was aware that VMTurbo has all the information to do showback, or as others like to call it, shameback, but why not use it for actual chargeback? I know that’s not what they advertise as a feature, but I was stubborn. There had to be a better way to extrapolate the information I needed from the application that already knows my environment!

I admit the spreadsheet took a lot of upfront work and mind-numbing calculations; if you need assistance let me know. I had to take into account all hardware costs (MDS port, FI, etc.) as well as the software costs (OS, antivirus, backup software). The charge wasn’t just for an empty VM but possibly a Windows Server 2012 R2 VM running SQL and requiring backups, running on a UCS environment. Using a SQL query and my cost analysis spreadsheet, I can now see how much it costs to run a VM based on actual utilization, and how much it costs to provision up front with the requirements requested.

My rudimentary spreadsheet ended up looking like this:
costcalcs

Now that I had my spreadsheet of awesome, I needed to use it with VMTurbo. I first created 2 groups based on operating system so I could separate my Linux VMs from Windows. There is obviously a price difference between running those operating systems.
windowsvm
linuxvms2
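If you want to sanity-check that the groups made it into the database, a quick query against the same vm_group_members table used by the full report query (at the end of this post) will list them. This is just a rough sketch; the LIKE patterns match my group names, so adjust them for your environment.

-- Optional sanity check: list the user-created OS groups and how many VMs each contains
select group_name, count(member_uuid) as vm_count
from vm_group_members
where internal_name like 'GROUP-USER-%'
  and (group_name like 'Windows %' or group_name like 'linux')
group by group_name;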

Now that I had my groups saved, I had to create a custom report. A newer feature of VMTurbo lets you input free-form text, which in my case was a SQL query.

The query looks at my groups and grabs their statistics. (Full query at the end of this post.)
costhowbackreport

Now that I have my VM Summary for Cost Showback saved, I can run it and download the output to see what my VMs have allocated and what they are actually using.
reportgenerated

Now here comes the fun part. Saving this output from VMTurbo, I import the data into my cost analysis spreadsheet with macro goodness. I can now get a good idea of what a VM is costing me monthly, either by what is allocated or by what it is actually using, for this particular environment. Mission accomplished! Please note, storage was not taken into consideration at this time as it was not a requirement for chargeback.
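If macros aren’t your thing, the same math can be roughed out in SQL. The sketch below is purely hypothetical: cost_rates and my_report_export are not VMTurbo tables, just stand-ins for my per-unit monthly rates and for the downloaded report data, and the dollar figures are placeholders rather than real pricing. It prices a VM on allocated vCPU and RAM plus a flat per-OS charge, which is the general shape of what my spreadsheet does.

-- Hypothetical scratch table holding per-unit monthly rates (placeholder numbers only)
create table if not exists cost_rates (
  os          varchar(64),    -- matches the OS / group_name column in the report
  vcpu_rate   decimal(10,2),  -- $ per allocated vCPU per month
  ram_gb_rate decimal(10,2),  -- $ per allocated GB of RAM per month
  base_rate   decimal(10,2)   -- flat per-VM charge: OS license, antivirus, backup agent, etc.
);

insert into cost_rates values
  ('Windows VMs', 12.00, 6.00, 45.00),   -- stand-in for my Windows group name and rates
  ('linux',        8.00, 4.00, 10.00);   -- stand-in for my Linux group name and rates

-- Allocated (provisioned) monthly cost per VM, using the columns the report exports
select r.display_name,
       r.OS,
       round(r.Num_of_VCPUs * c.vcpu_rate
           + (r.VMem_Capacity_MB / 1024) * c.ram_gb_rate
           + c.base_rate, 2) as allocated_monthly_cost
from my_report_export r
join cost_rates c on c.os = r.OS;

Swap VMem_Avg_MB in for VMem_Capacity_MB (and weight the CPU charge by VCPU_Avg_MHZ over VCPU_Capacity_MHZ) and you get a usage-based number instead of an allocation-based one. Just remember the FORMAT() calls in the report turn numbers into comma-formatted strings, so strip those before doing math on them.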

No, that over-provisioned VM you want to deploy is not free.

SQL Query for VMTurbo Report


select
distinct(vminst.display_name),
vmgrps.group_name as OS,
vmvstor.VStorage_Capacity_in_MB,
vmvstor.VStorage_Used_in_MB,
vmvstor.VStorage_Used_Percent,
vmvCPU.Num_of_VCPUs,
vmvCPUUsed.VCPU_Capacity_MHZ,
vmvCPUUsed.VCPU_Avg_MHZ,
vmvCPUUsed.VCPU_Peak_MHZ,
vmvMem.VMem_Capacity_MB,
vmvMem.VMem_Avg_MB,
vmvMem.VMem_Peak_MB,
vmvstor.Date
from
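-- vmvstor: VStorage capacity and utilization per VM for the last day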
(select
uuid,
FORMAT(capacity,2) as VStorage_Capacity_in_MB,
FORMAT((avg_value*capacity),2) as VStorage_Used_in_MB,
FORMAT((avg_value*100),2) as VStorage_Used_Percent,
snapshot_time as 'Date'
from vm_stats_by_day
where property_type="VStorage" and property_subtype='utilization' and to_days(snapshot_time) >= to_days(date_sub(now(), interval 1 day))
group by uuid)
as vmvstor
join
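-- vmvCPU: number of vCPUs allocated to each VM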
(select
uuid,
FORMAT(max_value, 0) as 'Num_of_VCPUs',
snapshot_time
from vm_stats_by_day
where property_type="NumVCPUs" and to_days(snapshot_time) >= to_days(date_sub(now(), interval 1 day))
order by uuid, snapshot_time)
as vmvCPU
on vmvCPU.uuid = vmvstor.uuid
join
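-- vmvCPUUsed: vCPU capacity, average and peak usage in MHz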
(select
uuid,
FORMAT(capacity,2) as 'VCPU_Capacity_MHZ',
FORMAT(avg_value,2) as 'VCPU_Avg_MHZ',
FORMAT(max_value,2) as 'VCPU_Peak_MHZ',
snapshot_time
from vm_stats_by_day
where property_type="VCPU" and property_subtype="used" and to_days(snapshot_time) >= to_days(date_sub(now(), interval 1 day))
order by uuid, snapshot_time)
as vmvCPUUsed
on vmvCPUUsed.uuid = vmvstor.uuid
join
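-- vmvMem: memory capacity, average and peak usage in MB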
(select
uuid,
FORMAT((capacity/1024),2) as 'VMem_Capacity_MB',
FORMAT((avg_value/1024),2) as 'VMem_Avg_MB',
FORMAT((max_value/1024),2) as 'VMem_Peak_MB',
snapshot_time
from vm_stats_by_day
where property_type="VMem" and property_subtype="used" and to_days(snapshot_time) >= to_days(date_sub(now(), interval 1 day))
order by uuid, snapshot_time)
as vmvMem
on vmvMem.uuid = vmvstor.uuid
join
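-- vminst: VM display names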
(select
uuid,
display_name as 'display_name'
from vm_instances)
as vminst
on vmvstor.uuid = vminst.uuid
join
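-- vmgrps: membership in the user-created Windows and Linux OS groups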
(select member_uuid, group_name
from vm_group_members
where internal_name like 'GROUP-USER-%' and (group_name like 'Windows %' or group_name like 'linux'))
as vmgrps
on vmvCPU.uuid = vmgrps.member_uuid

SolarWinds AppStack with SRM Flavor and More!

I think every engineer or sysadmin wants the one monitoring tool that does it all.  It is elusive, and many vendors claim they can deliver it.  In a lot of scenarios you see product vAwesome that monitors just my VMs, product PLENTYOFIOPS that monitors storage, and product iNEVERSLEEP that monitors my network.  Then there’s overlap with SCOM, vCOps or whatever product the DBA team has chosen.  There are probably monitoring tools running in your environment that you aren’t even aware of, because teams just want something that works for their particular area of expertise. Eventually, there is so much chatter that people ignore the alerts.

stewie

Mom, mommy, ma, mum, mommy… the C drive is full.  The C drive is full. Lois!

 

At Virtualization Field Day 4, SolarWinds demonstrated how many of their monitoring tools now integrate with AppStack, the ‘single pane of glass’ into your environment.

First, everything is based on their Orion platform, which maintains the schema for the visualization in the UI.  Then there are integration pieces like Network Performance Monitor (NPM), Server & Application Monitor (SAM), and Web Performance Monitor (WPM, formerly Pingdom). Also, through the Hyper9 acquisition, they created Virtualization Manager (VMan) and integrated it into Orion. SolarWinds really has so many offerings, I can’t cover them all here.  Once a component is integrated with Orion, it becomes part of a powerful tool set.  There is a SOAP API today, which I believe is moving to REST invoked with JSON.  There might be some PowerShell and Python down the road.  I’m seeing more and more Python around lately.

The really cool part is seeing these pieces under one tab through AppStack.  SolarWinds openly admits they are not trying to solve every sysadmin’s problem.  They are trying to make it easier to troubleshoot with all of the information in one place.

 

Now on to what’s new and available in AppStack:

 

Server & Application Monitor
The 6.2 version now enables monitoring of server and application performance hosted with IaaS cloud providers such as Amazon EC2, Rackspace, and Microsoft Azure, and it can combine that data with your on-site server and application statistics, all in one tool.  It now also features AppInsight for IIS; previously, AppInsight was only available for SQL and Exchange.  Most monitoring tools are agentless, using WMI and SNMP to gather data.  SAM, however, does use agents in order to see your SQL queries, buffer size, etc.

Storage Resource Monitor
Storage Resource Monitor replaces Storage Manager in the SolarWinds lineup.  It supports dozens of common SAN and NAS arrays, including the new NetApp Cluster-Mode, as well as the IBM N series (NetApp-based), NetApp E-Series, EMC VNX family and Dell EqualLogic PS Series arrays.  You can now drill down within AppStack and see if a specific LUN in your RAID group is impacting your application.

Virtualization Manager
New in 6.2, Virtualization Manager is now AppStack-enabled!  OpenStack and KVM are said to be on the roadmap.  Set baselines for your VMs and determine how much of a variance should be considered an anomaly.  Items such as host health and VM sprawl are now under one view with AppStack.

Web Performance Monitor
Web Performance Monitor 2.2 has also been added to the AppStack dashboard.  Transaction health checks and page load speeds, all integrated into AppStack!

Here is VMan with the AppStack view alongside it:
vmapp

I can bring up the AppStack view and hit Spotlight to surface pertinent alerts.
vmapp2

Yellow is not great, red is bad; you know the drill.  Get more information on AppStack goodness.

My Take
If you already own these products, taking advantage of AppStack is a no-brainer. I think this is a good first approach by SolarWinds to bring so many pieces of a large puzzle under one roof. I look forward to seeing more products integrated into AppStack.  It not only brings monitoring to the sysadmin, but also maps out the underlying dependencies and infrastructure to help bring about swift resolution.

He-Man_MOTU

#VFD4 – VMTurbo, Master of the software-defined universe – Part 1

VMTurbo was the third presenter, finishing off the first exciting day of Virtualization Field Day 4 in Austin.  It was great to see Eric Wright, also known as @discoposse, in his new role as Technology Evangelist.  Actually, I wish he had been more a part of the presentation because it was easy to be engaged with his presenting style.  Canadian Nicolas Cage, anyone?

58934682

VMTurbo uses a well-known supply-and-demand economic model in order to ensure application performance and maximize efficiency.  The customer sets a desired state, and VMTurbo, using its supply chain model, proactively recommends actions such as migrating a virtual machine or increasing/decreasing its CPU, ensuring performance before an anomaly can affect your critical VM or application.  Customers see a 20-40% VM density increase because of the data analytics applied to your workloads.

You can even determine the best place to deploy a new application workload based on the projected demand and the existing application workload demand within your clusters.

deploy
Instead of being a monitoring tool that alerts you that something is wrong in your environment, it presents decisions to mitigate risk. For example, chatty VMs will be placed together to eliminate network hops.  That isn’t fixing a problem, just creating a better opportunity for performance.  You can have a cluster that appears balanced in usage, but maybe a VM is seeing high CPU ready times because it is over-sized.  VMTurbo will suggest a right-size for the VM in order to alleviate the ready queue, or perhaps recommend that another host be added to the cluster.  This happens in a lot of environments, since “more is better” is a common way of thinking when delivering CPU and memory to an application.

willy

So back to the economy model of buyers and sellers. Your datacenter is represented as a supply chain where your VMs are the consumers and the infrastructure components are the supply.
Applications are grouped into a vPod and the infrastructure (hypervisor, storage, etc.) is grouped into a dPod.  If memory is in high demand, the cost goes up for your VMs or grouped vPod.  The cost of a transaction is also taken into account: if the cost would be too high and the gain not worth it, the recommendation will not be given. This ensures that you don’t have a VM bouncing back and forth in a cluster or going up and down in allocated RAM, CPU, etc.
If you schedule an event to take place, let’s say right-sizing a VM, but recent activity shows the VM would suffer from the change, then it will not take place.

To me this sounds like a dynamic resource pool that I don’t have to babysit or script to maintain shares for my critical applications!

Now on to my favorite part, RESTful APIs!  Not only does VMTurbo utilize them to interact with components such as Arista, but you can also dig in and get information out of VMTurbo.  Eric has a great post regarding the awesome that is the API.

Part 2 will be – CHARGEBACK!

 

blazer

#VFD4 – Scale Computing – Hyperconvergence for a 4-year-old

Yes, that’s what Jason Collier, CTO and co-founder of Scale Computing, said in his presentation at VFD4. A 4-year-old deployed a Windows server virtual machine.

It may sound like an infomercial, but Jason is not your typical CTO either.  A cowboy in a Star Wars shirt underneath an infamous blazer, he kept kicking his cluster with his boots during the demo at Virtualization Field Day 4.

The focus and vision of Scale Computing is to solve the complexity of modern infrastructure. Throughout the on-site demo, yes ON SITE (this makes a delegate giddy), simplicity, scalability and high availability were emphasized. A virtual solution right out of the box, intuitive enough that staff don’t need training.

Honestly, Scale Computing was the last session of a long technical week, but their story was intriguing enough to keep me and the rest of the delegates engaged. Created in 2007, Scale Computing debuted their product at VMworld of all places. Yes, VMware’s conference, direct competition. Needless to say, I don’t think they’ll be getting an invite back. But they really don’t need to; Scale Computing is focused on the small to medium-sized market. They have over 1,000 customers, and one of their larger deployments is eight three-node clusters. The product is meant for the company with a small IT department, or maybe no IT department at all.
roy

What separates Scale Computing from the others is that they do not run off of a VSA (virtual storage appliance). There is no VM in the data path; instead they use QEMU, which provides a storage pathing component that directly connects to the KVM kernel. They have created their own distribution based on RHEL and, along with SCRIBE, have eliminated storage protocols.  Scale Computing has also built an orchestration stack to control all of the components.  For now, that means no exposed APIs or CLI commands for the IT admin.  This is to keep the integrity of the product and to fully support the complete stack.

What’s great about using QEMU is that you can virtualize anything that runs on an x86 platform. Collier referenced a customer with OS/2 virtualized in their environment.  In a past life, this would have been useful to me!

The solution is plug-and-play hyperconvergence for the SMBs out there. Rack, stack and plug an IP into the console and your first node is up and running. IP your second host and point it to the cluster, and now you have added a node to the cluster. The interface is simple as well: the top shows the CPU, RAM and disk utilization of your cluster.  Click on disk and it shows you each node and the amount of space used.
3nodes

Creating a VM is simple. Hit the plus sign and you get this screen.  Virtual disks are provided by the SCRIBE daemon and the performance drivers are virtio.  This eliminates storage protocols and provides seamless integration into the QEMU/KVM-based VM.

createVM

CPU, RAM and disk size options depend on the cluster’s capabilities.  See your cluster options here.  The hardware is Dell, and pricing is again simple, with premium tech support included for the first year.  Scale Computing is offering tech support as a product too: 24×7, 365 days a year, you’ll get support from a technician in Indianapolis, Indiana. High-end support is very important to this company, and it shows with a higher customer loyalty rating than Apple (and Apple lovers are crazy!).

Now that you have created the VM, you can boot from an ISO that you have uploaded to the Control Center.
ISOs

Want to migrate a VM? No problem, click on the arrows and then on the host you want it to live migrate/vMotion to (maybe they need their own vernacular).
amysclone

Here I moved the VM from the first host onto the second host in the cluster.

vmsonserver

Rolling upgrades, non-disruptive host adds, console within a web browser; it’s all just easy.  Clone in seconds thanks to the power of SCRIBE.

Now, does something this simple offer HA and replication? Yes!
This is where your old hardware and current virtualization product could be reused. In this type of market, cost is key, and many may not be willing to give up their previous virtual environment of choice. Simply select Remote Clusters and add your cluster. I hope you have noticed that we are going to replicate from the Empire to the Rebellion.
replication.rebellion

A secure SSH tunnel and key exchange takes place for replication.  All of the configuration and metadata is replicated so you can recover your entire VM by cloning from the snapshot.

replication

All VM snapshots are replicated every 5 minutes using asynchronous replication.  After the initial sync, just the block differentials get pushed.  There aren’t any fancy dedupe options, but if you base your images off the same clone, only the metadata will be sent to the replication site. VSS support will be available in upcoming releases.

SCRIBE as the backend is essential to all of this.  With QEMU, virtual machines share a memory buffer with SCRIBE.  SCRIBE can schedule I/O using the memory buffer. Metadata gets cached locally in memory but the data is striped across the whole cluster.

The Paxos protocol comes into play during a failure scenario, as seen below.

failover

A is replicated to the DR site.  B is cloned off of A and has its metadata replicated, and so forth for C.  D, on the right, is created after the failure.  Once the primary site is back online, the system realizes D is based off a snapshot of B, so only “2” is replicated back since the original data already exists at the primary site. Note: failover and failback are manual processes.

The interface is simple, but the underlying technology is not.  I could go on, but you can learn more about SCRIBE from Storage Field Day 5.

Most of us, being VMware users, almost felt guilty loving this solution.  It fills a gap where consumers may just be entering the virtual space, or are ready for the next level but licensing costs come into play.  The staff may not be virtualization experts, but with Scale Computing’s solution and support, they can provide a fully scalable virtual solution with DR capabilities to their company.

I would love to hear your thoughts or answer any questions.