<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://aznot.com/index.php?action=history&amp;feed=atom&amp;title=VMworld_2014%2FOther_Notes</id>
	<title>VMworld 2014/Other Notes - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://aznot.com/index.php?action=history&amp;feed=atom&amp;title=VMworld_2014%2FOther_Notes"/>
	<link rel="alternate" type="text/html" href="https://aznot.com/index.php?title=VMworld_2014/Other_Notes&amp;action=history"/>
	<updated>2026-05-09T17:58:49Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://aznot.com/index.php?title=VMworld_2014/Other_Notes&amp;diff=880&amp;oldid=prev</id>
		<title>Kenneth: /* 8/28/14 (at VMWorld) */</title>
		<link rel="alternate" type="text/html" href="https://aznot.com/index.php?title=VMworld_2014/Other_Notes&amp;diff=880&amp;oldid=prev"/>
		<updated>2014-09-05T21:16:42Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;8/28/14 (at VMWorld)&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== 8/24/14 (VMWorld in SFO) ==&lt;br /&gt;
&lt;br /&gt;
 easy walk from hotel&lt;br /&gt;
 registered using qr code in email&lt;br /&gt;
&lt;br /&gt;
=== sw defined data center ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    umbrella term for underlying technologies: vcloud management, vsan, nsx&lt;br /&gt;
    reduces overhead of datacenter IT&lt;br /&gt;
    customer reports IT department went from 500 people to 39 - runs business on 6 racks&lt;br /&gt;
    vCloud - private cloud&lt;br /&gt;
    this is seismic - maybe you felt it early this morning&lt;br /&gt;
    vRealize - packages of management components&lt;br /&gt;
        enterprise&lt;br /&gt;
        smb&lt;br /&gt;
        SaaS (sw as a service) - vRealize Air Automation&lt;br /&gt;
    new competencies this year: mgmt automation, sw defined storage, network virtualization&lt;br /&gt;
    paas (question)&lt;br /&gt;
    openstack APIs provides access to sddc infrastructure - vmware contributes to this open source project&lt;br /&gt;
        www.openstack.org - Open source software for building private and public clouds.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== hybrid cloud strategy ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    vcloud air is a hybrid strategy&lt;br /&gt;
    vcloud air replaces on-premises services, as needed&lt;br /&gt;
    same mgmt, networking, &amp;amp; security&lt;br /&gt;
    today 6% of workload is in the cloud&lt;br /&gt;
    services&lt;br /&gt;
        devops&lt;br /&gt;
        db as a service&lt;br /&gt;
            Microsoft SQL &amp;amp; MySQL&lt;br /&gt;
        object storage&lt;br /&gt;
            beta using EMC in Sept - GA about EOY&lt;br /&gt;
        mobility services&lt;br /&gt;
        cloud mgmt&lt;br /&gt;
            vRealize Air Automation (formerly vCloud automation center)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== what&amp;#039;s new in vCloud suite ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    dc virtualization &amp;amp; standardization&lt;br /&gt;
        vCenter support assistant&lt;br /&gt;
            automatic regular data collection with pattern recognition&lt;br /&gt;
    security controls native to infrastructure&lt;br /&gt;
        vSphere replication improvements&lt;br /&gt;
    HA &amp;amp; Resilient Infrastructure&lt;br /&gt;
        vCenter Site Recovery Manager (SRM)&lt;br /&gt;
            disaster recovery&lt;br /&gt;
            disaster avoidance&lt;br /&gt;
            planned migration&lt;br /&gt;
                can test a proposed migration/upgrade&lt;br /&gt;
            new&lt;br /&gt;
                vCO plugin&lt;br /&gt;
                    APIs are also accessible via PowerCLI&lt;br /&gt;
                support for more vms&lt;br /&gt;
                faster using batch processing&lt;br /&gt;
                integrated with web UI&lt;br /&gt;
            works within a local NSX environment, not across the entire NSX environment&lt;br /&gt;
            does not support vCloud air, but the cloud will have this type of functionality someday&lt;br /&gt;
    app &amp;amp; infrastructure delivery automation&lt;br /&gt;
        vcac&lt;br /&gt;
            new interfaces with more flexible workflows&lt;br /&gt;
            NSX - control from vSphere&lt;br /&gt;
            puppet integration&lt;br /&gt;
            localization - 10+ languages&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 8/25/14 (at VMWorld) ==&lt;br /&gt;
&lt;br /&gt;
=== Storage DRS deep dive ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    stuff is in vSphere 6.0 beta&lt;br /&gt;
    problem&lt;br /&gt;
        shared datastore with 2 different workloads&lt;br /&gt;
        you add a backup, but it uses a lot of your bw&lt;br /&gt;
        you want it to use just enough bw that it finishes on time&lt;br /&gt;
    storage performance controls&lt;br /&gt;
        shares&lt;br /&gt;
            VM is assigned a shares value indicating the relative IOPs load it should get&lt;br /&gt;
        limit&lt;br /&gt;
            max IOPs allowed per VM&lt;br /&gt;
        reservations&lt;br /&gt;
            min IOPs per VM&lt;br /&gt;
    ESX 5.5 IO scheduler (mClock)&lt;br /&gt;
        implements scheduling using the above controls&lt;br /&gt;
        breaks large IOs into 32KB units for accounting purposes, so the IOPs controls also control bw&lt;br /&gt;
    storage IO control&lt;br /&gt;
        works across hosts that share a store&lt;br /&gt;
        congestion detection based on latency threshold, causes host to be throttled&lt;br /&gt;
        threshold is a setting&lt;br /&gt;
    sdrs overview&lt;br /&gt;
        balances load by moving vdisks between stores in the storage cluster&lt;br /&gt;
        allows vdisks to have affinity for each other, so if one wants to move, the others will also&lt;br /&gt;
    sdrs deployment&lt;br /&gt;
        you have to understand how this works when using complex storage use cases&lt;br /&gt;
            thin&lt;br /&gt;
            dedup&lt;br /&gt;
            auto-tiering&lt;br /&gt;
    sdrs monitors replications&lt;br /&gt;
    storage io control best practices&lt;br /&gt;
        don&amp;#039;t mix vsphere luns and non-vsphere luns&lt;br /&gt;
        set host IO queue size to the highest allowed&lt;br /&gt;
        set congestion threshold conservatively high&lt;br /&gt;
    ds cluster best practices&lt;br /&gt;
        similar ds performance&lt;br /&gt;
        similar capacities&lt;br /&gt;
    ds &amp;amp; host connectivity&lt;br /&gt;
        allow max possible connectivity&lt;br /&gt;
    vSphere storage policy based management&lt;br /&gt;
        now works with different profiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
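&lt;br /&gt;
A quick sketch of the 32KB accounting described above - my own illustration, not VMware&amp;#039;s code; the function names are invented:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch of the mClock-style accounting: large IOs are charged as&lt;br /&gt;
# multiple 32KB units, so a per-VM IOPs control also bounds bandwidth.&lt;br /&gt;
&lt;br /&gt;
UNIT = 32 * 1024  # accounting unit, 32KB&lt;br /&gt;
&lt;br /&gt;
def io_cost(size_bytes):&lt;br /&gt;
    # e.g. a 256KB IO is charged as 8 IO units&lt;br /&gt;
    return max(1, -(-size_bytes // UNIT))  # ceiling division&lt;br /&gt;
&lt;br /&gt;
def shares_split(total_iops, shares):&lt;br /&gt;
    # divide available IOPs across VMs in proportion to their shares&lt;br /&gt;
    total = sum(shares.values())&lt;br /&gt;
    return {vm: total_iops * s / total for vm, s in shares.items()}&lt;br /&gt;
&lt;br /&gt;
print(io_cost(256 * 1024))                              # 8&lt;br /&gt;
print(shares_split(10000, {&amp;quot;prod&amp;quot;: 2000, &amp;quot;backup&amp;quot;: 500}))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;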
&lt;br /&gt;
=== How VMware virtual volumes (VVols) will provide shared storage with x-ray vision ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    challenges in external storage architectures&lt;br /&gt;
    hypervisor can help&lt;br /&gt;
    knows the needs of the apps in real time&lt;br /&gt;
    global view of infrastructure&lt;br /&gt;
    SDS &amp;amp; VVols&lt;br /&gt;
        policy-driven control plane&lt;br /&gt;
        virtual data plane&lt;br /&gt;
            virtual data services&lt;br /&gt;
            virtual datastores&lt;br /&gt;
                VASA provider is a new player - agent for the array, ESX manages the array via vasa APIs&lt;br /&gt;
                arrays are logically partitioned into Storage Containers&lt;br /&gt;
                vm disks called virtual volumes are created natively on the Storage Containers&lt;br /&gt;
                IO from ESX to array is through an access point called a Protocol Endpoint (PE), so the data path is essentially unchanged&lt;br /&gt;
                advertised data services are offloaded to the array&lt;br /&gt;
                managed through policies - no need to do LUN management&lt;br /&gt;
    HP 3PAR and VMware&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
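&lt;br /&gt;
Roughly how I picture the control-plane flow - illustration only; the class and method names below are invented, not the real VASA API:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The array is partitioned into Storage Containers; ESX asks the&lt;br /&gt;
# array&amp;#039;s VASA provider to create VM disks (virtual volumes)&lt;br /&gt;
# natively in a container that can satisfy the VM&amp;#039;s storage policy.&lt;br /&gt;
# Data IO still flows through a Protocol Endpoint, so the data path&lt;br /&gt;
# is essentially unchanged.&lt;br /&gt;
&lt;br /&gt;
class VasaProvider:  # agent for the array&lt;br /&gt;
    def __init__(self):&lt;br /&gt;
        self.containers = {}&lt;br /&gt;
&lt;br /&gt;
    def create_container(self, name, capabilities):&lt;br /&gt;
        self.containers[name] = {&amp;quot;caps&amp;quot;: set(capabilities), &amp;quot;vvols&amp;quot;: []}&lt;br /&gt;
&lt;br /&gt;
    def create_vvol(self, container, vm, policy):&lt;br /&gt;
        # the array refuses if it cannot provide the required services&lt;br /&gt;
        if not set(policy) &amp;lt;= self.containers[container][&amp;quot;caps&amp;quot;]:&lt;br /&gt;
            raise ValueError(&amp;quot;container cannot satisfy policy&amp;quot;)&lt;br /&gt;
        self.containers[container][&amp;quot;vvols&amp;quot;].append((vm, policy))&lt;br /&gt;
&lt;br /&gt;
p = VasaProvider()&lt;br /&gt;
p.create_container(&amp;quot;gold&amp;quot;, [&amp;quot;replication&amp;quot;, &amp;quot;snapshots&amp;quot;])&lt;br /&gt;
p.create_vvol(&amp;quot;gold&amp;quot;, &amp;quot;vm-42&amp;quot;, [&amp;quot;replication&amp;quot;])  # no LUN management&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;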
&lt;br /&gt;
=== Understanding virtualized memory management performance ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    concerns&lt;br /&gt;
        vms configured memory size&lt;br /&gt;
            too small -&amp;gt; low performance&lt;br /&gt;
            too large -&amp;gt; high overhead&lt;br /&gt;
        #vms/host&lt;br /&gt;
            too many -&amp;gt; low performance&lt;br /&gt;
            too few -&amp;gt; wastes host memory&lt;br /&gt;
        memory reclamation method&lt;br /&gt;
            proper -&amp;gt; minimal performance impact&lt;br /&gt;
    layered mem mgmt (app, vm, host)&lt;br /&gt;
        each layer assumes it owns all configured memory&lt;br /&gt;
        each layer improves mem utilization by using free memory for optimizations&lt;br /&gt;
        cross-layer knowledge is limited&lt;br /&gt;
    memory undercommit&lt;br /&gt;
        sum of all vm memory size &amp;lt;= host memory&lt;br /&gt;
        no reclamation&lt;br /&gt;
    memory overcommit&lt;br /&gt;
        sum &amp;gt; host memory&lt;br /&gt;
        ESX may map only a subset of VM memory (reclaims the rest)&lt;br /&gt;
    memory entitlement &amp;amp; reclamation&lt;br /&gt;
        compute memory entitlement for each VM &amp;amp; reclaim if &amp;lt; consumed&lt;br /&gt;
        based on reservation, limit, shares, memory demand&lt;br /&gt;
        ESX classifies memory as active &amp;amp; idle&lt;br /&gt;
        sample each page each minute &amp;amp; see which were used&lt;br /&gt;
    entitlement parameters&lt;br /&gt;
        configured memory size (what guest sees)&lt;br /&gt;
        reservation (min)&lt;br /&gt;
        limit (max)&lt;br /&gt;
        shares (relative priority for the VM)&lt;br /&gt;
        idle memory&lt;br /&gt;
    reclamation techniques&lt;br /&gt;
        transparent page sharing - remove duplicate 4K pages in background&lt;br /&gt;
            uses content hash&lt;br /&gt;
        ballooning - pushes memory pressure from ESX into VM - used when host free memory drops below ~4% of ESX memory&lt;br /&gt;
            allocates pinned memory from guest&lt;br /&gt;
            now that we know the guest can&amp;#039;t use that memory, it is reclaimed and given to another VM&lt;br /&gt;
            possible side effect: cause paging in guest&lt;br /&gt;
        swapping &amp;amp; compression&lt;br /&gt;
            if ballooning runs out of memory&lt;br /&gt;
            randomly chooses a page to compress/swap - use swap if compression savings &amp;lt; 50%&lt;br /&gt;
    best practices&lt;br /&gt;
        performance goals&lt;br /&gt;
            handle burst memory pressure well&lt;br /&gt;
            constant memory pressure should be handled by DRS/vMotion, etc&lt;br /&gt;
        monitoring tools&lt;br /&gt;
            vCenter performance chart, esxtop, memstats&lt;br /&gt;
                host level&lt;br /&gt;
                use when isolating problem&lt;br /&gt;
            vCenter Operations (vCOps)&lt;br /&gt;
                monitor cluster/dc&lt;br /&gt;
                determine if you have a problem&lt;br /&gt;
        guard against active memory reclamation&lt;br /&gt;
            vm mem size &amp;gt; highest demand during peak loads&lt;br /&gt;
            if necessary, set the reservation above guest demand&lt;br /&gt;
            use stats from vCOps manager gui&lt;br /&gt;
        page sharing &amp;amp; large page&lt;br /&gt;
            memory saving from page sharing good for homogeneous vms&lt;br /&gt;
            intra- &amp;amp; inter-vm sharing&lt;br /&gt;
            what prevents sharing&lt;br /&gt;
                guest has ASLR (address space layout randomization)&lt;br /&gt;
                guest has SuperFetch (proactive caching)&lt;br /&gt;
                host uses large pages, because ESXi does not share large pages&lt;br /&gt;
            why large page&lt;br /&gt;
                fewer tlb misses&lt;br /&gt;
                faster page table look up time&lt;br /&gt;
        impact on memory overcommitment&lt;br /&gt;
            sharing is broken when any small page is ballooned or swapped&lt;br /&gt;
        best practices&lt;br /&gt;
            don&amp;#039;t disable page sharing&lt;br /&gt;
            don&amp;#039;t disable host large page, except with VDI&lt;br /&gt;
            install vmware tools &amp;amp; enable ballooning&lt;br /&gt;
            provide sufficient swap space in guest&lt;br /&gt;
            place guest swap file/partition on separate vdisk&lt;br /&gt;
            don&amp;#039;t disable memory compression&lt;br /&gt;
            host cache is nice to have - maybe 20% of ssd - more is potentially wasteful&lt;br /&gt;
        optimizations of host swapping&lt;br /&gt;
            sharing before swap&lt;br /&gt;
            compressing before swap&lt;br /&gt;
            swap to host cache ssd&lt;br /&gt;
    memory overcommitment guidance&lt;br /&gt;
        configured&lt;br /&gt;
            sum of all vm mem / host mem size&lt;br /&gt;
            keep &amp;gt; 1&lt;br /&gt;
        active&lt;br /&gt;
            sum of active vm mem / host mem size&lt;br /&gt;
            keep &amp;lt; 1&lt;br /&gt;
        use vCenter Operations to track avg &amp;amp; max mem demand&lt;br /&gt;
        monitor performance counters&lt;br /&gt;
            mem.consumed does not mean anything&lt;br /&gt;
            reclamation counters (mem.balloon, swapUsed, compressed, shared) - non-0 values do not mean there is a problem&lt;br /&gt;
                it just means these things have done their job somewhere in the past&lt;br /&gt;
            mem.swapInRate constant non-0 means problem&lt;br /&gt;
            mem.latency - estimates the perf impact due to compression/swapping&lt;br /&gt;
            mem.active - if low, reclaimed memory is not a problem&lt;br /&gt;
            virtDisk.readRate writeRate&lt;br /&gt;
                large means more swapping is happening&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
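&lt;br /&gt;
A small arithmetic helper for the overcommit guidance above (my own sketch):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# configured ratio: sum of configured VM memory / host memory, keep &amp;gt; 1&lt;br /&gt;
# active ratio:     sum of active VM memory / host memory, keep &amp;lt; 1&lt;br /&gt;
&lt;br /&gt;
def overcommit(vms, host_mem_gb):&lt;br /&gt;
    configured = sum(v[&amp;quot;configured_gb&amp;quot;] for v in vms) / host_mem_gb&lt;br /&gt;
    active = sum(v[&amp;quot;active_gb&amp;quot;] for v in vms) / host_mem_gb&lt;br /&gt;
    return configured, active&lt;br /&gt;
&lt;br /&gt;
vms = [{&amp;quot;configured_gb&amp;quot;: 32, &amp;quot;active_gb&amp;quot;: 10},&lt;br /&gt;
       {&amp;quot;configured_gb&amp;quot;: 64, &amp;quot;active_gb&amp;quot;: 22},&lt;br /&gt;
       {&amp;quot;configured_gb&amp;quot;: 48, &amp;quot;active_gb&amp;quot;: 15}]&lt;br /&gt;
cfg, act = overcommit(vms, host_mem_gb=128)&lt;br /&gt;
print(cfg, act)  # 1.125 and ~0.37: host memory is used, reclamation stays idle&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;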
&lt;br /&gt;
=== IO Filtering ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    allow filters to process a vms io to its vmdks. inside esx, outside vm.&lt;br /&gt;
    allow 3rd party data services&lt;br /&gt;
    VAIO&lt;br /&gt;
    filters running in userspace&lt;br /&gt;
    allows for out-of-band releases - isolates filters from kernel&lt;br /&gt;
    extremely performant - ~1us latency for the filter framework&lt;br /&gt;
        the ESX kernel was modified to allow a usermode driver like this to be extremely performant&lt;br /&gt;
    general purpose API - raw IO stream&lt;br /&gt;
    limit v1 SDK to 2 use cases (for test considerations)&lt;br /&gt;
        cache&lt;br /&gt;
        replication&lt;br /&gt;
    only on vSCSI devices&lt;br /&gt;
        vSCSI turns T10 cmds into ioctls - find out more about this (?)&lt;br /&gt;
    services&lt;br /&gt;
        high performance event queue access&lt;br /&gt;
        tight integration with vSphere&lt;br /&gt;
        full access to guest ios - synchronous access&lt;br /&gt;
        automated deployment&lt;br /&gt;
        flexible - requires user to add vC extensions to manage&lt;br /&gt;
    design&lt;br /&gt;
        filter driver registers with VAIO&lt;br /&gt;
        IO: VM -&amp;gt; VAIO -&amp;gt; filter driver -&amp;gt; VAIO -&amp;gt; hardware&lt;br /&gt;
            filter has to send the IO on eventually&lt;br /&gt;
        response: hardware -&amp;gt; VAIO -&amp;gt; filter driver -&amp;gt; VAIO -&amp;gt; VM&lt;br /&gt;
        filter may initiate its own IOs&lt;br /&gt;
        filter may talk to flash or &amp;quot;other&amp;quot; that is recognized as a block device&lt;br /&gt;
        filters may share kernel space memory or can use IP sockets&lt;br /&gt;
        events indicate when a snapshot or vmotion occurs&lt;br /&gt;
        only in C&lt;br /&gt;
        need both 32 &amp;amp; 64 bit version, because esx is a 32bit OS with a 64bit process space&lt;br /&gt;
        one instance per VMX, must be re-entrant&lt;br /&gt;
    EMC recoverpoint is partner - to be in their 2015 release&lt;br /&gt;
    SanDisk VAIO server side caching&lt;br /&gt;
        scalable, distributed r/w cache&lt;br /&gt;
    beta Q4 2014 with ESX6.0&lt;br /&gt;
        filters must be certified (signed by vmware)&lt;br /&gt;
    expect GA early in 2015 (depends on ESX6)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
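&lt;br /&gt;
A toy model of the dispatch flow described above - real VAIO filters are C plugins against VMware&amp;#039;s SDK; all names here are invented:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# IO:       VM -&amp;gt; framework -&amp;gt; filter -&amp;gt; framework -&amp;gt; hardware&lt;br /&gt;
# response: hardware -&amp;gt; framework -&amp;gt; filter -&amp;gt; framework -&amp;gt; VM&lt;br /&gt;
&lt;br /&gt;
class PassThroughFilter:&lt;br /&gt;
    def on_io(self, io, send_down):&lt;br /&gt;
        # a cache filter could complete reads here instead; either&lt;br /&gt;
        # way the filter must eventually send the IO on&lt;br /&gt;
        return send_down(io)&lt;br /&gt;
&lt;br /&gt;
class Framework:&lt;br /&gt;
    def __init__(self, filters, hardware):&lt;br /&gt;
        self.filters, self.hardware = filters, hardware&lt;br /&gt;
&lt;br /&gt;
    def submit(self, io):&lt;br /&gt;
        def run(i, io):&lt;br /&gt;
            if i == len(self.filters):&lt;br /&gt;
                return self.hardware(io)  # bottom of the stack&lt;br /&gt;
            return self.filters[i].on_io(io, lambda x: run(i + 1, x))&lt;br /&gt;
        return run(0, io)&lt;br /&gt;
&lt;br /&gt;
fw = Framework([PassThroughFilter()], hardware=lambda io: (&amp;quot;done&amp;quot;, io))&lt;br /&gt;
print(fw.submit((&amp;quot;read&amp;quot;, 0, 4096)))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;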
&lt;br /&gt;
== 8/26/14 (at VMWorld) ==&lt;br /&gt;
&lt;br /&gt;
=== SanDisk cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    virtual SAN - 3-32 nodes share local storage&lt;br /&gt;
        contains vmdks&lt;br /&gt;
    virtual SAN cache&lt;br /&gt;
        30% reserved for write buffer&lt;br /&gt;
        storage policy&lt;br /&gt;
            failure to tolerate setting&lt;br /&gt;
            number of disk drives per object (stripe width)&lt;br /&gt;
        design considerations&lt;br /&gt;
            performance&lt;br /&gt;
                #disk groups - speed/capacity tradeoff&lt;br /&gt;
                SSD parameters - ~10% HDD capacity&lt;br /&gt;
                storage policy&lt;br /&gt;
                disk controller - bw, qdepth, pass-thru vs raid0&lt;br /&gt;
            capacity&lt;br /&gt;
                use sd card to install vSphere &amp;amp; free 2 disk slots&lt;br /&gt;
            availability&lt;br /&gt;
    vsan monitoring gui&lt;br /&gt;
        would like to see historical data added&lt;br /&gt;
        used esxtop for that&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
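&lt;br /&gt;
Quick arithmetic for the sizing rules of thumb above (my own sketch):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# flash sized at ~10% of HDD capacity; of that, 30% write buffer,&lt;br /&gt;
# the remaining 70% read cache&lt;br /&gt;
&lt;br /&gt;
def size_flash(hdd_capacity_gb, flash_ratio=0.10):&lt;br /&gt;
    flash = hdd_capacity_gb * flash_ratio&lt;br /&gt;
    return {&amp;quot;flash_gb&amp;quot;: flash,&lt;br /&gt;
            &amp;quot;write_buffer_gb&amp;quot;: 0.30 * flash,&lt;br /&gt;
            &amp;quot;read_cache_gb&amp;quot;: 0.70 * flash}&lt;br /&gt;
&lt;br /&gt;
# e.g. a disk group of 7 x 1.2TB HDDs (8400GB raw)&lt;br /&gt;
print(size_flash(7 * 1200))  # ~840GB flash: ~252 write, ~588 read&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;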
&lt;br /&gt;
=== meet the vvol engr team ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Derek Uluski, tech lead&lt;br /&gt;
    Patrick Dirks, Sr. Manager&lt;br /&gt;
    does not work with SRM yet - huge shift for SRM&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== what&amp;#039;s next for sds? ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    what are docker linux containers (?)&lt;br /&gt;
    a control abstraction, collecting storage by how it can be used (policies)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== vsan deep dive ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    product goals&lt;br /&gt;
        customer: vsphere admin&lt;br /&gt;
        reduce total cost of ownership (capex &amp;amp; opex)&lt;br /&gt;
        SDS for vmware&lt;br /&gt;
    what is it&lt;br /&gt;
        aggregates local flash &amp;amp; hdds&lt;br /&gt;
        shared ds for all hosts in the cluster&lt;br /&gt;
        no single point of failure&lt;br /&gt;
    scale-out - add nodes&lt;br /&gt;
    scale-up - increase capacity of existing storage&lt;br /&gt;
    3-32 nodes&lt;br /&gt;
    &amp;lt;= 4.4 PB&lt;br /&gt;
    2M IOPs 100% reads, 640K IOPs 70% reads&lt;br /&gt;
    highly fault tolerant&lt;br /&gt;
        resiliency goals in policy&lt;br /&gt;
    a combination of user and kernel code embedded into ESXi 5.5 to reduce latency&lt;br /&gt;
    simple cluster config &amp;amp; mgmt&lt;br /&gt;
        a check box in the new cluster dialog&lt;br /&gt;
        then automatic or manual device selection&lt;br /&gt;
    simplified provisioning for applications&lt;br /&gt;
        pick storage policy for each vm&lt;br /&gt;
    policy parameters&lt;br /&gt;
        space reservation&lt;br /&gt;
        # failures to tolerate&lt;br /&gt;
        # disk stripes&lt;br /&gt;
        % flash cache&lt;br /&gt;
    disk groups&lt;br /&gt;
        1 flash device + 1-7 magnetic disks&lt;br /&gt;
        host has up to 5 groups&lt;br /&gt;
    flash&lt;br /&gt;
        30% write-back buffer&lt;br /&gt;
        70% read cache&lt;br /&gt;
        ~10% of hdd&lt;br /&gt;
    storage controllers&lt;br /&gt;
        good queue depth helps&lt;br /&gt;
        pass-through or RAID0 mode supported&lt;br /&gt;
    network&lt;br /&gt;
        layer 2 multicast must be enabled on physical switches&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
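&lt;br /&gt;
The &amp;quot;# failures to tolerate&amp;quot; policy above drives the replica count; a sketch of the commonly cited math (my addition, not from the session): tolerating n failures needs n+1 copies of the data plus witness components, so at least 2n+1 hosts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def ftt_requirements(failures_to_tolerate):&lt;br /&gt;
    n = failures_to_tolerate&lt;br /&gt;
    return {&amp;quot;data_copies&amp;quot;: n + 1, &amp;quot;min_hosts&amp;quot;: 2 * n + 1}&lt;br /&gt;
&lt;br /&gt;
for n in range(4):&lt;br /&gt;
    print(n, ftt_requirements(n))&lt;br /&gt;
# FTT=1 (the default) means 2 data copies + 1 witness,&lt;br /&gt;
# hence the 3-node minimum noted above&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;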
&lt;br /&gt;
== 8/27/14 (at VMWorld) ==&lt;br /&gt;
&lt;br /&gt;
=== NSX-MH reference design ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    2 flavors of NSX&lt;br /&gt;
    cloud&lt;br /&gt;
        compute - provided by hypervisors&lt;br /&gt;
        storage&lt;br /&gt;
        network &amp;amp; security - provided by NSX&lt;br /&gt;
    NSX-MH is for non-ESXi and/or mix of hypervisors&lt;br /&gt;
        any CMS&lt;br /&gt;
        any compute&lt;br /&gt;
        any storage&lt;br /&gt;
        any network fabric&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== vsan performance benchmarking ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    exchange simulation&lt;br /&gt;
    oltp simulation&lt;br /&gt;
    olio (lots of vms)&lt;br /&gt;
        kept RAM/vm low to reduce vm caching and get vsan traffic&lt;br /&gt;
    analytics&lt;br /&gt;
        single vm/node&lt;br /&gt;
        separate ds and inter-vm networks&lt;br /&gt;
    VPI 2.0 (beta)&lt;br /&gt;
        data collection appliance&lt;br /&gt;
        analyzes live vm IO workloads&lt;br /&gt;
        each vm gets a score indicating whether it should be in a vsan cluster&lt;br /&gt;
    configuring for performance&lt;br /&gt;
        ssd:md ratio so ssd holds most of working set&lt;br /&gt;
    stripe width&lt;br /&gt;
        adjust if % of vms ios being served …&lt;br /&gt;
    I wonder if they have looked into the impact of using DCE components?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 8/28/14 (at VMWorld) ==&lt;br /&gt;
&lt;br /&gt;
=== Quid - augmented intelligence (vCenter) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    SSO&lt;br /&gt;
        ability to view multiple vCenters from one place&lt;br /&gt;
        multiple identity sources&lt;br /&gt;
        ability to use different security policies&lt;br /&gt;
    web client&lt;br /&gt;
    inventory service&lt;br /&gt;
        cache inventory information from the vpxd&lt;br /&gt;
        allows other products to show up in web client&lt;br /&gt;
        they are working on hiding the fact that this exists&lt;br /&gt;
    vCenter server&lt;br /&gt;
        vpxd&lt;br /&gt;
            communicates with hypervisors&lt;br /&gt;
            records stats&lt;br /&gt;
            services client requests&lt;br /&gt;
        Vctomcat&lt;br /&gt;
            health&lt;br /&gt;
            SRS - stats reporting service&lt;br /&gt;
            EAM - ESX Agent Manager&lt;br /&gt;
        log browser&lt;br /&gt;
        PBSM&lt;br /&gt;
            SMS + policy engines&lt;br /&gt;
            services storage views client requests&lt;br /&gt;
    resource usage&lt;br /&gt;
        java processes for all these services, except vpxd&lt;br /&gt;
    performance&lt;br /&gt;
        biggest issue: resource requirements&lt;br /&gt;
    may need to tune JVM heap size according to inventory size&lt;br /&gt;
    minimum system configurations are just that&lt;br /&gt;
    embedded db for inventory service&lt;br /&gt;
        requires 2-3K IOPs, depending on load&lt;br /&gt;
        place on its own spindles, possibly ssds&lt;br /&gt;
    heaps&lt;br /&gt;
        must be tuned manually&lt;br /&gt;
    db performance&lt;br /&gt;
        vc stores statistics at 5-min intervals&lt;br /&gt;
        vc saves config changes&lt;br /&gt;
        vc answers certain client queries&lt;br /&gt;
        vc persists version&lt;br /&gt;
        rolls up stats - 30 min, 2 hours, 1 day&lt;br /&gt;
        purges stats&lt;br /&gt;
        purges events (if auto-purge is enabled, which is recommended)&lt;br /&gt;
        purges tasks (...)&lt;br /&gt;
        topN computation - 10 min, 30 min, 2 hrs, 1 day&lt;br /&gt;
        SMS data refresh - 2 hrs&lt;br /&gt;
        vc-to-db latency important (often more so than esx-to-vc latency)&lt;br /&gt;
            place db and vc close&lt;br /&gt;
        db traffic is mostly writes&lt;br /&gt;
        manage db disk growth&lt;br /&gt;
            ~80-85% is stats, events, alarms, tasks&lt;br /&gt;
            ~10-15% is inventory data&lt;br /&gt;
        640 concurrent operations supported, after that queued&lt;br /&gt;
        2000 concurrent sessions max&lt;br /&gt;
        8 provisioning operations/host at a time&lt;br /&gt;
            so when cloning, can use multiple identical sources to increase the concurrency&lt;br /&gt;
        128 vmotions/host at a time&lt;br /&gt;
        8 storage vmotions/host at a time&lt;br /&gt;
        limits can be changed but not officially supported&lt;br /&gt;
    beyond vc5.0&lt;br /&gt;
        5.1 &amp;amp; 5.5: stats tables are partitioned&lt;br /&gt;
    stats level&lt;br /&gt;
        level 2 uses 4x more db activity than level 1&lt;br /&gt;
        level 3 uses 6x more than level 2&lt;br /&gt;
        level 4 uses 1.4x more than level 3&lt;br /&gt;
        use vc stats calculator&lt;br /&gt;
        VCOps can be used for more advanced stats&lt;br /&gt;
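        so the multipliers compound (my arithmetic, relative to level 1):&lt;br /&gt;
            level 1 = 1x, level 2 = 4x, level 3 = 4 x 6 = 24x, level 4 = 24 x 1.4 ~= 33.6x&lt;br /&gt;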
    API performance&lt;br /&gt;
        powerCLI - simple to use, but involves client-side filtering&lt;br /&gt;
    web client&lt;br /&gt;
        C# client uses aggressive refresh of client data&lt;br /&gt;
        web client decouples client requests from vpxd&lt;br /&gt;
        3x less load than C# client&lt;br /&gt;
        make it easier for clients to write plugins - by adding data to inventory service&lt;br /&gt;
        merge on-premises and hybrid experience&lt;br /&gt;
        platform independence&lt;br /&gt;
        reduced refresh frequency&lt;br /&gt;
        leverages flex&lt;br /&gt;
        issues&lt;br /&gt;
            flex has issues&lt;br /&gt;
            performance - login time, ...&lt;br /&gt;
            different nav model (they tried to hide things that were used less)&lt;br /&gt;
            resource requirements&lt;br /&gt;
        performance&lt;br /&gt;
            chrome/IE faster than firefox&lt;br /&gt;
            browser machine should have 2 CPUs &amp;amp; 4GB&lt;br /&gt;
            browser, app server, &amp;amp; inventory server should be in the same geography&lt;br /&gt;
                can RDP to a local browser server&lt;br /&gt;
            size heaps&lt;br /&gt;
    looking ahead&lt;br /&gt;
        putting tasks back in their place&lt;br /&gt;
        right click will work like it used to&lt;br /&gt;
        improve lateral nav&lt;br /&gt;
    deployment&lt;br /&gt;
        single vs multiple vCenters&lt;br /&gt;
        single reduces latency but requires fully-resourced vm&lt;br /&gt;
        vCenter performance is sensitive to vc-to-esx latency&lt;br /&gt;
        sweet spot is 200 hosts, 2000 VMs per vc&lt;br /&gt;
        separate by&lt;br /&gt;
            departments&lt;br /&gt;
            pci/non-pci racks&lt;br /&gt;
            server/desktop workloads&lt;br /&gt;
            geographies&lt;br /&gt;
    SSO and linked mode&lt;br /&gt;
        SSO does not share roles/privileges/licenses&lt;br /&gt;
        linked mode&lt;br /&gt;
            allows this&lt;br /&gt;
            uses windows-only technology&lt;br /&gt;
            slower login&lt;br /&gt;
            slower search&lt;br /&gt;
            one slow vc can slow everything&lt;br /&gt;
        blogger #vCenterGuy&lt;br /&gt;
    future&lt;br /&gt;
        linux appliance with performance/feature parity with windows&lt;br /&gt;
        html5&lt;br /&gt;
        cross-vc vmotion&lt;br /&gt;
        linked sso convergence&lt;br /&gt;
        performance&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Kenneth</name></author>
	</entry>
</feed>