Channel: VMware Communities : All Content - All Communities

Older versions of Windows NT


Hi

 

Has anybody been able to install older (pre-4.0) versions of NT under VMware Fusion? I get these errors when trying to install them:

 

NT 3.51: Installs and runs, but reports a failed service and network is non-operable

NT 3.5: Gives a stop error when using SCSI hard drive, or "cannot load keyboard layout" with IDE hard drive

NT 3.1: Gives a cryptic "ArcPathToDosPath" error when installing

 

I've tried both with and without the patches to allow them to work on post-P5 architecture processors, and it didn't really make much of a difference either way.

 

I could've sworn that I had these working on VMware Workstation under Windows a while back, so hopefully there's some way to get them working under Fusion too.


Skyline Log Limit


I've noticed I am only able to process two log requests at a time. Is this by design? If I queue up multiple log requests they all fail with the response "Log Transfer failed for collector because Task of type GET_SUPPORT_BUNDLE_V2 is currently running." These requests are against different vCenters.

 

So is this a hard limit, or is it something I can change if I add CPU and memory to the collector? Or should I deploy a Skyline collector per vCenter?

Workstation 15.2.2 no Removable Devices appear to select?


I upgraded Workstation from 15.2.1 to 15.2.2. The Removable Devices list is nearly empty now. How do I make a device needed for the virtual machine appear in the list?

Unable to update to the latest Windows 10 Insider build


I have run into this... Is this fixable with another SCSI controller, and if yes, which one should I use?

Screenshot 2020-03-23 at 10.47.27.png

Deploy Tunnel app to Windows desktop


Dears,

 

Is there any way to deploy the VMware Tunnel app to Windows desktops through the Catalog application? We have 1000 users and we need to deploy the Tunnel.

 

Thanks,

NTP Servers


Being a novice to vSphere/vCenter, I am not sure of the proper steps for setting up NTP servers. Is there any good documentation out there? Or is someone able to provide me some assistance? I have 2 hosts and neither has NTP servers set up. Thank you.

 

NTP1.png

Integrate VMware Integrated OpenStack with more than one syslog


Author : Daniel Clow

URL : http://docs.vmware.com/en/VMware-Integrated-OpenStack/5.1/com.vmware.openstack.install.doc/GUID-1A82ED4C-08BC-4189-B075-883174272E70.html

Topic Name : Integrate VMware Integrated OpenStack with vRealize Log Insight

Publication Name : VMware Integrated OpenStack Installation and Configuration Guide

Product/Version : VMware Integrated OpenStack/5.1

Question :

The documentation states how to configure a single syslog destination, namely vRLI. How can I send all events to more than one destination in order to eliminate a SPoF?

Error: Detected an invalid snapshot configuration.


Hello,

When replicating our backup server, we systematically get an error:

"Error: Detected an invalid snapshot configuration".

 

I checked in VMware and there is no snapshot on the VM (there are old ones on the replicas).

Should I delete the snapshots on the replicas?

Should I delete the replicas of the VM?

 

Thanks in advance for your help.


VMware Player 14 keeps asking for a password when attempting to download / update VMware Tools


My VMware version and build: 14.1.7 build-12989993

I am running Linux kernel: 4.9.200-antix.1-amd64-smp

 

I have already set my Connection Settings to "No Proxy".

For now I have to manually download the tools!

 

Any fix for this bug?

Boxer Email Connection Errors


My users are suddenly having problems enrolling their devices in Boxer email. They have enrolled in the AirWatch Intelligent Hub and are pre-configured to connect to our Secure Email Gateway (SEG) offered by VMware. During initial setup we were able to get this to connect perfectly fine and test connections were correct. However, now, about 2 months later, users are getting a "The host address provided is not valid." error. This only happens for users who are newly enrolling; users who were already enrolled continue to receive their email service fine. Running test connections in the AirWatch dashboard also typically states "Test is unsuccessful" with the "Connectivity from AirWatch to SEG" section marked off. Sometimes this reports OK and the test succeeds every once in a while, but it's rare. I don't recall making any configuration changes, but I would be open to suggestions on how to ensure my users can connect successfully again.

 

Thank you all in advance.

Add Edge VM, not able to see N-VDS


NSX-T 2.4

I'm adding an Edge VM to a host prepared for NSX-T that is in the overlay transport zone.

When adding the N-VDS to the Edge VM and selecting uplink1, in the dropdown menu I see only VSS port groups and I do not see any N-VDS segment.

What's wrong?

 

Thanks in advance

What happens when the license limit is exceeded?


What happens when the license limit is exceeded in Workspace ONE (device licenses)?

We have a 25-license limit, but more than 25 devices are online.

Of course we've ordered new licenses, but this will take some time (reseller).

Thanks.

Network Readiness - Part 1: Physical Network MTU


Dear readers

Welcome to a new series of blogs about network readiness. As you might already be aware, NSX-T mainly requires two things from the physical underlay network:

  • IP Connectivity – IP connectivity between all NSX-T components and compute hosts. This includes the Geneve Tunnel Endpoint (TEP) interfaces as well as the management interfaces (typically vmk0) on hosts and on NSX-T Edge nodes - both bare metal and virtual NSX-T Edge nodes.
  • Jumbo Frame Support – The minimum required MTU is 1600 bytes; however, an MTU of 1700 bytes is recommended to cover the full variety of functions and to future-proof the environment for an expanding Geneve header. To get the most out of your VMware SDDC, your physical underlay network should support at least an MTU of 9000 bytes.

This blog focuses on MTU readiness for NSX-T. There are other VMkernel interfaces besides the one used for overlay encapsulation with Geneve, such as vSAN or vMotion, which also perform better with a higher MTU, so we keep this MTU discussion fairly general. Physical network gear vendors, like Cisco with the Nexus data center switch family, typically support an MTU of 9216 bytes. Other vendors might have the same upper MTU limit.

 

This blog covers the correct MTU configuration and its verification within a data center spine-leaf architecture with Nexus 3K switches running NX-OS. Let's have a look at a very basic and simple lab spine-leaf topology with only three Nexus N3K-C3048TP-1GE switches:

Lab Spine Leaf Topology.png

Out of the box, the Nexus 3048 switches are configured with an MTU of 1500 bytes only. For an MTU of 9216 bytes we need to configure three pieces:

  • Layer 3 Interface MTU Configuration – This type of interface is used between the Leaf-10 and Borderspine-12 switches and between the Leaf-11 and Borderspine-12 switches, respectively. We run OSPF on this interface to announce the Loopback0 interface for the iBGP peering connectivity. As an example, the Layer 3 interface MTU configuration on interface e1/49 of Leaf-10 is shown below:
Nexus 3048 Layer 3 Interface MTU Configuration

NY-N3K-LEAF-10# show run inter e1/49

---snip---

interface Ethernet1/49

  description **L3 to NY-N3K-BORDERSPINE-12**

  no switchport

  mtu 9216

  no ip redirects

  ip address 172.16.3.18/30

  ip ospf network point-to-point

  no ip ospf passive-interface

  ip router ospf 1 area 0.0.0.0

NY-N3K-LEAF-10#

 

  • Layer 3 Switch Virtual Interface (SVI) MTU Configuration – This type of interface is required, for example, to establish IP connectivity between the Leaf-10 and Leaf-11 switches when the interfaces between the Leaf switches are configured as Layer 2 interfaces. We are using a dedicated SVI for VLAN 3 for the OSPF neighborship and the iBGP peering connectivity between Leaf-10 and Leaf-11. In this lab topology, interfaces e1/51 and e1/52 are configured as a dot1q trunk to carry multiple VLANs (including VLAN 3), and these two interfaces are combined into a port channel running LACP for redundancy. As an example, the MTU configuration of the SVI for VLAN 3 on Leaf-10 is shown below:
Nexus 3048 Switch Virtual Interface (SVI) MTU Configuration

NY-N3K-LEAF-10# show run inter vlan 3

---snip---

interface Vlan3

  description *iBGP-OSPF-Peering*

  no shutdown

  mtu 9216

  no ip redirects

  ip address 172.16.3.1/30

  ip ospf network point-to-point

  no ip ospf passive-interface

  ip router ospf 1 area 0.0.0.0

NY-N3K-LEAF-10#

 

  • Global Layer 2 Interface MTU Configuration – This global configuration is required for this type of Nexus switch and a few other Nexus switches (please see footnote 1 for more details). The Nexus 3000 does not support individual Layer 2 interface MTU configuration; the MTU for Layer 2 interfaces must be configured via a network-qos policy. All interfaces configured as access or trunk ports for host connectivity, as well as the dot1q trunk between the Leaf switches (e1/51 and e1/52), require the network-qos configuration shown below:
Nexus 3048 Global MTU QoS Policy Configuration

NY-N3K-LEAF-10#show run

---snip---

policy-map type network-qos POLICY-MAP-JUMBO

  class type network-qos class-default

   mtu 9216

system qos

  service-policy type network-qos POLICY-MAP-JUMBO

NY-N3K-LEAF-10#

 

The network-qos global MTU configuration can be verified with the command shown below:

Nexus 3048 Global MTU QoS Policy Verification

NY-N3K-LEAF-10# show queuing interface ethernet 1/51-52 | include MTU

HW MTU of Ethernet1/51 : 9216 bytes

HW MTU of Ethernet1/52 : 9216 bytes

NY-N3K-LEAF-10#

 

The verification of the end-to-end MTU of 9216 bytes within the physical network should typically be done before you attach your first ESXi hypervisor hosts. Please keep in mind that the vSphere Distributed Switch (vDS) and the NSX-T N-VDS (e.g. the uplink profile MTU configuration) currently support up to 9000 bytes. This MTU includes the overhead for the Geneve encapsulation. As you can see in the table below from an ESXi host, the MTU is set to the maximum of 9000 bytes for the VMkernel interfaces used for Geneve (unfortunately still labeled vxlan) as well as for vMotion and IP storage.

ESXi Host MTU VMkernel Interface Verification

[root@NY-ESX50A:~] esxcfg-vmknic -l

Interface  Port Group/DVPort/Opaque Network        IP Family IP Address      Netmask         Broadcast       MAC Address       MTU     TSO MSS   Enabled Type     NetStack           

vmk0       2                                       IPv4      172.16.50.10    255.255.255.0   172.16.50.255   b4:b5:2f:64:f9:48 1500    65535     true    STATIC   defaultTcpipStack  

vmk2       17                                      IPv4      172.16.52.10    255.255.255.0   172.16.52.255   00:50:56:63:4c:85 9000    65535     true    STATIC   defaultTcpipStack  

vmk10      10                                      IPv4      172.16.150.12   255.255.255.0   172.16.150.255  00:50:56:67:d5:b4 9000    65535     true    STATIC   vxlan              

vmk50      910dba45-2f63-40aa-9ce5-85c51a138a7d    IPv4      169.254.1.1     255.255.0.0     169.254.255.255 00:50:56:69:68:74 1500    65535     true    STATIC   hyperbus           

vmk1       8                                       IPv4      172.16.51.10    255.255.255.0   172.16.51.255   00:50:56:6c:7c:f9 9000    65535     true    STATIC   vmotion            

[root@NY-ESX50A:~]
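 

Should a VMkernel interface still report an MTU of 1500 bytes, it can be raised with esxcli. A minimal sketch is shown below - the interface name vmk1 and the value of 9000 bytes are only examples, and keep in mind that the MTU of the vDS or N-VDS (uplink profile) carrying the interface must be raised as well:

ESXi VMkernel Interface MTU Configuration (example)

[root@NY-ESX50A:~] esxcli network ip interface set -m 9000 -i vmk1

[root@NY-ESX50A:~] esxcfg-vmknic -l | grep vmk1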

 

For sure, I still highly recommend verifying the end-to-end MTU between two ESXi hosts by sending VMkernel pings with the don't-fragment bit set (e.g. vmkping ++netstack=vxlan -d -c 3 -s 8972 -I vmk10 172.16.150.13).
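 

As a reference, this is how such a test looks in this lab: the -d flag sets the don't-fragment bit, the 8972-byte payload plus 28 bytes of ICMP/IP header exactly fills the 9000-byte VMkernel MTU, and ++netstack=vxlan sources the ping from the overlay TCP/IP stack. Repeat the test in the opposite direction as well:

ESXi End-to-End Geneve TEP MTU Verification (example)

[root@NY-ESX50A:~] vmkping ++netstack=vxlan -d -c 3 -s 8972 -I vmk10 172.16.150.13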

 

But for a serious end-to-end MTU 9216 physical network verification we need to look for a tool other than the VMkernel ping. In my case I'm simply using BGP running on the Nexus 3048 switches. BGP runs on top of TCP, and TCP supports the "Maximum Segment Size" option to maximize the size of the TCP datagrams.

 

The TCP Maximum Segment Size (MSS) is a parameter in the options field of the TCP header that specifies the largest amount of data, in bytes, that can be received in a single segment. This information is exchanged during the SYN phase of the TCP three-way handshake, as the diagram below from a Wireshark sniffer trace shows.

Wireshark-MTU9216-MSS-TCP.png

The TCP MSS defines the maximum amount of data that an IPv4 endpoint is willing to accept in a single TCP/IPv4 datagram. RFC 879 explicitly mentions that the MSS counts only data octets in the segment; it does not count the TCP header or the IP header. In the Wireshark trace example the two IPv4 endpoints (Loopbacks 172.16.3.10 and 172.16.3.12) have accepted an MSS of 9176 bytes on a physical Layer 3 link with MTU 9216 during the TCP three-way handshake. The difference of 40 bytes comes from the default TCP header of 20 bytes plus the IP header of another 20 bytes.

Please keep in mind that a small MSS value will reduce or eliminate IP fragmentation for any TCP-based application, but will result in higher overhead. This is also true for BGP messages.

BGP update messages carry all the BGP prefixes as part of the Network Layer Reachability Information (NLRI) path attribute. For optimal BGP performance in a spine-leaf architecture running BGP, it is advisable to set the MSS for BGP to the maximum value that still avoids fragmentation. As defined in RFC 879, all IPv4 endpoints are required to handle an MSS of 536 bytes (= MTU of 576 bytes minus 20 bytes for the TCP header** minus 20 bytes for the IP header).

But are these Nexus switches using an MSS of only 536 bytes? Nope!

These Nexus 3048 switches running NX-OS 7.0(3)I7(6) are configured by default to discover the maximum path MTU between the two IPv4 endpoints by leveraging the Path MTU Discovery (PMTUD) feature. Other Nexus switches may require the global command "ip tcp path-mtu-discovery" to enable PMTUD.
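 

On platforms where PMTUD is not enabled by default, the global command mentioned above would be applied roughly as follows - this is only a sketch, so please verify the exact syntax against the configuration guide of your specific NX-OS release:

Nexus Global PMTUD Configuration (example)

N3K-SWITCH# configure terminal

N3K-SWITCH(config)# ip tcp path-mtu-discovery

N3K-SWITCH(config)# end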

 

MSS is sometimes mistaken for PMTUD. MSS is a concept used by TCP at the transport layer; it specifies the largest amount of data that a computer or communications device can receive in a single TCP segment. PMTUD, on the other hand, determines the largest packet size that can be sent over a given path without suffering fragmentation.

 

But how can we verify the MSS used for the BGP peering sessions between the Nexus 3048 switches?

Nexus 3048 switches running NX-OS software allow the administrator to check the MSS of the BGP TCP session with the following command: show sockets connection tcp detail.

Below we see two BGP TCP sessions between the IPv4 endpoints (switch loopback interfaces), and each of the sessions shows an MSS of 9164 bytes.

BGP TCP Session Maximum Segment Size Verification

NY-N3K-LEAF-10# show sockets connection tcp local 172.16.3.10 detail

 

---snip---

 

Kernel Socket Connection:

State      Recv-Q Send-Q        Local Address:Port          Peer Address:Port

 

ESTAB      0      0               172.16.3.10:24415          172.16.3.11:179    ino:78187 sk:ffff88011f352700

 

     skmem:(r0,rb262144,t0,tb262144,f0,w0,o0) ts sack cubic wscale:2,2 rto:210 rtt:12.916/14.166 ato:40 mss:9164 cwnd:10 send 56.8Mbps rcv_space:18352

 

 

ESTAB      0      0               172.16.3.10:45719          172.16.3.12:179    ino:79218 sk:ffff880115de6800

 

     skmem:(r0,rb262144,t0,tb262144,f0,w0,o0) ts sack cubic wscale:2,2 rto:203.333 rtt:3.333/1.666 ato:40 mss:9164 cwnd:10 send 220.0Mbps rcv_space:18352

 

 

NY-N3K-LEAF-10#

Please always reset the BGP session when you change the MTU, as the MSS is only negotiated during the initial TCP three-way handshake.
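 

On NX-OS this can be done, for example, with a hard reset of the peerings (note that this is disruptive, and a soft reset is not sufficient because it does not rebuild the TCP session):

Nexus BGP Session Hard Reset (example)

NY-N3K-LEAF-10# clear ip bgp 172.16.3.11

NY-N3K-LEAF-10# clear ip bgp 172.16.3.12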

 

The MSS value of 9164 bytes confirms that the underlay physical network is ready with an end-to-end MTU of 9216 bytes. But why is the MSS value (9164) of BGP 12 bytes smaller than the TCP MSS value (9176) negotiated during the TCP three-way handshake?

Again, in many TCP/IP stack implementations we see an MSS of 1460 bytes with an interface MTU of 1500 bytes, and an MSS of 9176 bytes with an interface MTU of 9216 bytes (a 40-byte difference), but there are other factors that can change this. For example, if both sides support RFC 1323/7323 (enhanced timestamps, window scaling, PAWS***), this adds 12 bytes to the TCP header, reducing the payload to 1448 bytes and 9164 bytes respectively.

And indeed, the Nexus NX-OS TCP/IP stack used for BGP supports the TCP enhanced timestamps option by default and leverages the PMTUD (RFC 1191) feature to account for the extra 12 bytes, and hence reduces the maximum payload (the payload in our case being BGP) to an MSS of 9164 bytes.
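 

Putting the numbers from this lab together:

MSS Calculation Summary

MTU 9216 - 20 (IP header) - 20 (TCP header)                   = 9176 bytes (MSS advertised in the TCP SYN)

MTU 9216 - 20 (IP header) - 20 (TCP header) - 12 (timestamps) = 9164 bytes (effective MSS of the BGP session)

MTU 1500 - 20 (IP header) - 20 (TCP header) - 12 (timestamps) = 1448 bytes (the common 1500-byte case, for comparison)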

 

The diagram below from a Wireshark sniffer trace confirms the extra 12 bytes used for the TCP timestamps option.

Wireshark-TCP-12bytes-Option-timestamps.png

Hope you had a little bit of fun reading this small Network Readiness write-up.

 

Footnote 1: Configure and Verify Maximum Transmission Unit on Cisco Nexus Platforms - Cisco

** A 20-byte TCP header is only correct when the default TCP header options are used; RFC 1323 - TCP Extensions for High Performance (replaced by RFC 7323 - TCP Extensions for High Performance) defines TCP extensions which require up to 12 bytes more.

*** PAWS = Protect Against Wrapped Sequences

 

Software Inventory:

vSphere version: VMware ESXi, 6.5.0, 15256549

vCenter version: 6.5.0, 10964411

NSX-T version: 2.5.1.0.0.15314288 (GA)

Cisco Nexus 3048 NX-OS version: 7.0(3)I7(6)

 

Blog history:

Version 1.0 - 23.03.2020 - first published version

Horizon View Migration to new Server


Hey everyone,

 

hopefully everybody is fine in the current situation!

 

Currently we run a Horizon View environment with the following settings:

 

1 Security Server paired with 1 Connection Server - serving connections coming from outside our internal network (the certificate DNS name resolves to the Security Server IP when a client connects from outside)

1 non-paired Connection Server handling internal connection requests (the certificate DNS name resolves to the IP of this Connection Server when a client connects from the internal network)

 

The plan is to move to 2 completely new Connection Servers with the latest Windows OS and almost the latest Horizon server software,

and to deploy UAG instead of the Security Server (probably a 2-arm config).

 

Now I thought the easiest way would be to just install a new Connection Server, then do a replica, and deploy the UAG. But after skimming some documents, I doubt that this is even possible.

 

Deploying the UAG, meaning just "installing" it, should be no problem and shouldn't have any negative interaction with the current installation, as long as I do not add it as a gateway - that's how I understand it.

 

But what's the best and easiest way - migrating, or just deploying the new Connection Servers - with the least downtime and the possibility to test how everything runs with the UAG?

 

Thanks for some helpful responses. Stay healthy!

 

Best regards

Enrico

Get-Stat returns too many values even with MaxSamples set


I was having an issue with Get-Stat pulling a single value. This command was run against 800 ESXi hosts; most took only seconds to respond, however I was seeing some take as long as 2 minutes...

Get-Stat -Entity ($vmHost) -start (get-date).AddDays(-1) -Finish (Get-Date) -IntervalMins 5 -stat cpu.usage.average

 

After some investigation, I found the hosts that take minutes are returning multiple 'instances' of the same metric, and instead of returning a few hundred records they return 12,000 records, even for the past day at 5-minute intervals.

 

Thinking I could add -MaxSamples to limit the results, I changed the command to this...

Get-Stat -Entity ($vmHost) -start (get-date).AddDays(-1) -Finish (Get-Date) -IntervalMins 5 -MaxSamples 300 -stat cpu.usage.average

or even this....

Get-Stat -Entity ($vmHost) -start (get-date).AddDays(-1) -Finish (Get-Date) -IntervalMins 5 -MaxSamples 8 -stat cpu.usage.average

 

However, I still get thousands of records back and many minutes to respond on certain ones.

 

On the fast hosts, -MaxSamples works and limits the number of records, but on the slow hosts -MaxSamples seems to be ignored and the command still returns multiple "instances" of the object. I have tried -Instance "" and again, the fast ones respect the filter, but the slow ones do not.
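 

As a possible client-side workaround (a rough, untested sketch) I could filter the result down to the aggregate instance after the call and cap the record count myself, although that still wouldn't explain why -MaxSamples is ignored on the slow hosts:

$stats = Get-Stat -Entity ($vmHost) -start (get-date).AddDays(-1) -Finish (Get-Date) -IntervalMins 5 -stat cpu.usage.average

# keep only the aggregate rollup (empty Instance) and emulate MaxSamples client-side
$stats | Where-Object { $_.Instance -eq '' } | Select-Object -First 300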

 

This is a sample of what I get regardless of the settings...

 

MetricId                             Timestamp                          Value Unit     Instance

--------                                 ---------                                  -----   ----      --------

cpu.usage.average       3/23/2020 2:05:00 PM                0.32 %        5

cpu.usage.average       3/23/2020 2:00:00 PM                0.15 %        5

cpu.usage.average       3/23/2020 1:55:00 PM                0.31 %        5

cpu.usage.average       3/23/2020 1:50:00 PM                0.54 %        5

cpu.usage.average       3/23/2020 1:45:00 PM                0.06 %        5

cpu.usage.average       3/23/2020 2:20:00 PM                0.12 %        6

cpu.usage.average       3/23/2020 2:15:00 PM                0.35 %        6

cpu.usage.average       3/23/2020 2:10:00 PM                0.49 %        6

cpu.usage.average       3/23/2020 2:05:00 PM                0.36 %        6

cpu.usage.average       3/23/2020 2:00:00 PM                 0.6 %        6

cpu.usage.average       3/23/2020 1:55:00 PM                0.26 %        6

cpu.usage.average       3/23/2020 1:50:00 PM                0.29 %        6

cpu.usage.average       3/23/2020 1:45:00 PM                0.75 %        6

cpu.usage.average       3/23/2020 2:20:00 PM                0.21 %        7

cpu.usage.average       3/23/2020 2:15:00 PM                0.42 %        7

cpu.usage.average       3/23/2020 2:10:00 PM                0.12 %        7

cpu.usage.average       3/23/2020 2:05:00 PM                0.11 %        7

cpu.usage.average       3/23/2020 2:00:00 PM                0.28 %        7

cpu.usage.average       3/23/2020 1:55:00 PM                0.11 %        7

cpu.usage.average       3/23/2020 1:50:00 PM                0.15 %        7

cpu.usage.average       3/23/2020 1:45:00 PM                0.33 %        7

cpu.usage.average       3/23/2020 2:20:00 PM                0.23 %        8

cpu.usage.average       3/23/2020 2:15:00 PM                0.33 %        8

cpu.usage.average       3/23/2020 2:10:00 PM                0.18 %        8

cpu.usage.average       3/23/2020 2:05:00 PM                 0.2 %        8

cpu.usage.average       3/23/2020 2:00:00 PM                0.06 %        8

cpu.usage.average       3/23/2020 1:55:00 PM                0.06 %        8

cpu.usage.average       3/23/2020 1:50:00 PM                0.09 %        8

cpu.usage.average       3/23/2020 1:45:00 PM                1.18 %        8

cpu.usage.average       3/23/2020 2:20:00 PM                0.11 %        9


Upgrades: Standard switch and iSCSI target to distributed switch and vSAN


Hi to all,

 

and sorry in advance for my English.

 

Current configuration:

- 3 x Server Esxi 6.7 in one vCluster in one vDataCenter

- Each host with 4x 1Gb NICs + 2x 10Gb Nics

- Each host with 8x SSD

- vCenter's VM inside the cluster

 

- 10GB NICs still unused

- SSDs still unused

 

- 1 standard vSwitch0 with 2 x 1Gb NICs for management and vMotion traffic (active and standby crossed across the port groups)

- 1 standard vSwitch1 with 1 x 1Gb NIC for iSCSI traffic. (standby NIC removed recently, after an issue with additional NIC)

- 1 standard vSwitch2 with 1 x 1Gb NIC for VM traffic and different portgroup vlan isolated (standby NIC removed recently, after an issue with additional NIC)

- Two physical 1Gb switches with 10Gb uplinks; 2 x 1Gb NICs of each host in the first switch, 2 x 1Gb NICs of each host in the second switch

- Two physical 10Gb switches; 1 x 10Gb NIC of each host in the first switch, 1 x 10Gb NIC of each host in the second switch.

 

Goals:

Move to distributed vSwitch

Move to vSan and dismiss iSCSI target

 

Now, can someone please help me understand the best steps to complete the "migrations" with the fewest outages/issues... I hope!

 

Based on my knowledge, steps in my intentions:

1. New vDS with 2x 10Gb NIC as uplink and NIOC enabled

2. New portgroups in vDS, same as in current vSwitch2

3. Move all VM traffic from vSwitch2 to new vDS. (No more point of failure for VM traffic)

4. Remove vSwitch2 and add its NIC to vDS as third uplink

5. New portgroup in vDS for vMotion traffic

6. Move vMotion traffic to new vDS

7. Remove one NIC from vSwitch0 and add it to vDS as fourth uplink

 

Assuming that so far everything is correct, what are the next steps?

 

How do I go ahead with vSAN and its traffic?

How do I manage the vCenter VM which, I think, cannot be connected to the vDS?

One or more vDS?

 

Hope I was clear in my explanation.

 

Thanks in advance to all.

 

Regards.

 

Massimo

Big ISO media file upload to a catalog issue


Hi guys,

 

I'm experiencing an issue when I try to upload a huge ISO media file to a catalog. The whole operation takes quite a long time, so I hit some sort of time limit somewhere and I'm trying to figure out which one exactly. All settings defined in vCloud Director Administration/General/Timeouts look good, i.e. the numbers are big enough to allow a transfer or a session not to be interrupted if it takes less than 480 minutes, for example.

 

The problem occurs at the end of the 2nd stage of the transfer (the import step), once the file is being moved from /opt/vmware/vcloud-director/data/transfer to the actual location on the NFS storage I use.

That last part of the transfer seems to take more than 60 minutes, which to me looks like the reason it doesn't complete successfully. The exact time elapsed as per the logs is 3,798,255 ms, which is about 63 minutes, or just over 60, so the operation fails. I'm just trying to understand where that 60-minute (or other) threshold is defined and how I can raise it a bit.

 

The exact error message is:

Session has been opened for too long: storage-fabric-activity-pool-1353. Opened since: 3/23/20 3:09 AM

 

 

 

vcloud-container-debug.log.9:2020-03-23 04:12:52,432 | WARN     | storage-fabric-activity-pool-1353 | Conversation                   | Session has been opened for too long: storage-fabric-activity-pool-1353. Opened since: 3/23/20 3:09 AM. [Conversation: 4fb93070-c551-4782-a56d-5bdb25ac6a17, transaction fcec9d60-4029-4fb8-ae41-36b5b8880138, transactionDepth: 1] | requestId=eea900e4-2f7b-444c-9621-bc646909b85e,request=POST https://xxx.somecloud.com/api/catalog/02ef3f7d-21b6-4b3d-8dea-070be42c02ba/action/upload,requestTime=1584929847356,remoteAddress=10.51.39.252:28472,userAgent=Ruby,accept=application/*+xml;version 29.0 vcd=112e34af-125e-4a5e-9b10-4ed45ae72455,task=b92d8d4f-a321-4e12-9889-cb046e2e354e activity=(com.vmware.vcloud.backendbase.management.system.TaskActivity,urn:uuid:b92d8d4f-a321-4e12-9889-cb046e2e354e) activity=(com.vmware.ssdc.backend.services.impl.CreateMediaActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243) activity=(com.vmware.vcloud.fabric.storage.media.impl.CreateFromImportActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243)

 

 

vcloud-container-debug.log.9:2020-03-23 04:12:52,486 | DEBUG    | storage-fabric-activity-pool-1353 | CreateFromImportActivity       | [Activity Execution] Partial failure during Activity execution - Handle: urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243, Current Phase: CreateFromImportActivity$CopyPhase | requestId=eea900e4-2f7b-444c-9621-bc646909b85e,request=POST https://xxx.somecloud.com/api/catalog/02ef3f7d-21b6-4b3d-8dea-070be42c02ba/action/upload,requestTime=1584929847356,remoteAddress=10.51.39.252:28472,userAgent=Ruby,accept=application/*+xml;version 29.0 vcd=112e34af-125e-4a5e-9b10-4ed45ae72455,task=b92d8d4f-a321-4e12-9889-cb046e2e354e activity=(com.vmware.vcloud.backendbase.management.system.TaskActivity,urn:uuid:b92d8d4f-a321-4e12-9889-cb046e2e354e) activity=(com.vmware.ssdc.backend.services.impl.CreateMediaActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243) activity=(com.vmware.vcloud.fabric.storage.media.impl.CreateFromImportActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243)

 

 

vcloud-container-debug.log.9:2020-03-23 04:12:52,503 | DEBUG    | storage-fabric-activity-pool-1353 | CreateFromImportActivity       | [Activity Execution] Finished - Handle: urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243, Current Phase: CreateFromImportActivity$CopyPhase | requestId=eea900e4-2f7b-444c-9621-bc646909b85e,request=POST https://xxx.somecloud.com/api/catalog/02ef3f7d-21b6-4b3d-8dea-070be42c02ba/action/upload,requestTime=1584929847356,remoteAddress=10.51.39.252:28472,userAgent=Ruby,accept=application/*+xml;version 29.0 vcd=112e34af-125e-4a5e-9b10-4ed45ae72455,task=b92d8d4f-a321-4e12-9889-cb046e2e354e activity=(com.vmware.vcloud.backendbase.management.system.TaskActivity,urn:uuid:b92d8d4f-a321-4e12-9889-cb046e2e354e) activity=(com.vmware.ssdc.backend.services.impl.CreateMediaActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243) activity=(com.vmware.vcloud.fabric.storage.media.impl.CreateFromImportActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243)

 

 

vcloud-container-debug.log.9:2020-03-23 04:12:52,503 | DEBUG    | storage-fabric-activity-pool-1353 | CreateFromImportActivity       | [Activity Execution] Phase CopyPhase completed in 3,798,255 ms. - Handle: urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243, Current Phase: CreateFromImportActivity$CopyPhase | requestId=eea900e4-2f7b-444c-9621-bc646909b85e,request=POST https://xxx.somecloud.com/api/catalog/02ef3f7d-21b6-4b3d-8dea-070be42c02ba/action/upload,requestTime=1584929847356,remoteAddress=10.51.39.252:28472,userAgent=Ruby,accept=application/*+xml;version 29.0 vcd=112e34af-125e-4a5e-9b10-4ed45ae72455,task=b92d8d4f-a321-4e12-9889-cb046e2e354e activity=(com.vmware.vcloud.backendbase.management.system.TaskActivity,urn:uuid:b92d8d4f-a321-4e12-9889-cb046e2e354e) activity=(com.vmware.ssdc.backend.services.impl.CreateMediaActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243) activity=(com.vmware.vcloud.fabric.storage.media.impl.CreateFromImportActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243)

 

 

vcloud-container-debug.log.9:2020-03-23 04:12:52,503 | DEBUG    | storage-fabric-activity-pool-1353 | CreateFromImportActivity       | [Activity Execution] Activity completed in 3,798,255 ms. - Handle: urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243, Current Phase: CreateFromImportActivity$CopyPhase | requestId=eea900e4-2f7b-444c-9621-bc646909b85e,request=POST https://xxx.somecloud.com/api/catalog/02ef3f7d-21b6-4b3d-8dea-070be42c02ba/action/upload,requestTime=1584929847356,remoteAddress=10.51.39.252:28472,userAgent=Ruby,accept=application/*+xml;version 29.0 vcd=112e34af-125e-4a5e-9b10-4ed45ae72455,task=b92d8d4f-a321-4e12-9889-cb046e2e354e activity=(com.vmware.vcloud.backendbase.management.system.TaskActivity,urn:uuid:b92d8d4f-a321-4e12-9889-cb046e2e354e) activity=(com.vmware.ssdc.backend.services.impl.CreateMediaActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243) activity=(com.vmware.vcloud.fabric.storage.media.impl.CreateFromImportActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243)

 

 

vcloud-container-info.log:2020-03-23 04:12:52,432 | WARN     | storage-fabric-activity-pool-1353 | Conversation                   | Session has been opened for too long: storage-fabric-activity-pool-1353. Opened since: 3/23/20 3:09 AM. [Conversation: 4fb93070-c551-4782-a56d-5bdb25ac6a17, transaction fcec9d60-4029-4fb8-ae41-36b5b8880138, transactionDepth: 1] | requestId=eea900e4-2f7b-444c-9621-bc646909b85e,request=POST https://xxx.somecloud.com/api/catalog/02ef3f7d-21b6-4b3d-8dea-070be42c02ba/action/upload,requestTime=1584929847356,remoteAddress=10.51.39.252:28472,userAgent=Ruby,accept=application/*+xml;version 29.0 vcd=112e34af-125e-4a5e-9b10-4ed45ae72455,task=b92d8d4f-a321-4e12-9889-cb046e2e354e activity=(com.vmware.vcloud.backendbase.management.system.TaskActivity,urn:uuid:b92d8d4f-a321-4e12-9889-cb046e2e354e) activity=(com.vmware.ssdc.backend.services.impl.CreateMediaActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243) activity=(com.vmware.vcloud.fabric.storage.media.impl.CreateFromImportActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243)

 

 

 

 

 

 

 

 

vcloud-container-info.log:2020-03-23 04:12:52,408 | ERROR    | storage-fabric-activity-pool-1355 | FutureUtil                     | Future /vcd01/media/1992ea7e-8949-46d4-b67d-da866dd13d28/cb937566-3564-419e-8c4f-2cdb1ca9943f/media-37441614-ac3f-4e88-8ee3-d57ba6d170e7.iso failed | requestId=eea900e4-2f7b-444c-9621-bc646909b85e,request=POST https://xxx.somecloud.com/api/catalog/02ef3f7d-21b6-4b3d-8dea-070be42c02ba/action/upload,requestTime=1584929847356,remoteAddress=10.51.39.252:28472,userAgent=Ruby,accept=application/*+xml;version 29.0 vcd=112e34af-125e-4a5e-9b10-4ed45ae72455,task=b92d8d4f-a321-4e12-9889-cb046e2e354e activity=(com.vmware.vcloud.backendbase.management.system.TaskActivity,urn:uuid:b92d8d4f-a321-4e12-9889-cb046e2e354e) activity=(com.vmware.ssdc.backend.services.impl.CreateMediaActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243) activity=(com.vmware.vcloud.fabric.storage.media.impl.CreateFromImportActivity,urn:uuid 2efd67b4-bcee-45dd-b9b0-6924faca9243) activity=(com.vmware.vcloud.fabric.storage.copy.impl.UploadDatastoreFilesActivity,urn:uuid:4fc243d5-6ef8-4266-957b-63c32b7e44dd)

 

 

vcloud-container-info.log:2020-03-23 04:12:52,432 | WARN     | storage-fabric-activity-pool-1353 | Conversation                   | Session has been opened for too long: storage-fabric-activity-pool-1353. Opened since: 3/23/20 3:09 AM. [Conversation: 4fb93070-c551-4782-a56d-5bdb25ac6a17, transaction fcec9d60-4029-4fb8-ae41-36b5b8880138, transactionDepth: 1] | requestId=eea900e4-2f7b-444c-9621-bc646909b85e,request=POST https://xxx.somecloud.com/api/catalog/02ef3f7d-21b6-4b3d-8dea-070be42c02ba/action/upload,requestTime=1584929847356,remoteAddress=10.51.39.252:28472,userAgent=Ruby,accept=application/*+xml;version 29.0 vcd=112e34af-125e-4a5e-9b10-4ed45ae72455,task=b92d8d4f-a321-4e12-9889-cb046e2e354e activity=(com.vmware.vcloud.backendbase.management.system.TaskActivity,urn:uuid:b92d8d4f-a321-4e12-9889-cb046e2e354e) activity=(com.vmware.ssdc.backend.services.impl.CreateMediaActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243) activity=(com.vmware.vcloud.fabric.storage.media.impl.CreateFromImportActivity,urn:uuid:2efd67b4-bcee-45dd-b9b0-6924faca9243)

 

 

Cheers!

VMware Horizon - Documents folder


Hello,

I am new to VMware and I am trying to either hide the Documents folder or somehow stop people from using it on their virtual desktop. Does anyone know how to accomplish this?

Continuously Collect In-Guest Metrics from Desktop Sessions


I know you can request desktop session process usage on demand. However, is there a way to set this to poll continuously on an interval across all active desktop sessions, to get a holistic view of what applications are consuming CPU, RAM, etc. across a cluster?

Horizon Skype Virtualization - External users have optimization enabled?


Hello all,

 

As with many of you in recent weeks, we are seeing some unique scenarios as our user base shifts to mostly remote usage. I have a scenario where our agents have the Skype Virtualization Pack installed, and our thin clients in the office also have it enabled. However, now that almost everyone is working from home, I had expected the optimization to fall back, but that's simply not the case. Obviously, there is no communication between remote client endpoints.

 

Any known configs for disabling the Skype optimization when the user is external? I could likely do a UEM ADMX policy for external only, but would like to hear more.

 

Thanks


