VirtualBox

Custom Query (16363 matches)

Results (1954 - 1956 of 16363)

Ticket Resolution Summary Owner Reporter
#1999 fixed Adding a LUN via 'VBoxManage addiscsidisk' fails when target is a NetApp. David Lightman
Description

NetApp: R200 running Data ONTAP 7.2.2

dlight@dlight ~ $ VBoxManage addiscsidisk -server 10.144.40.52 -target iqn.1992-08.com.netapp:sn.50420453 -lun 1
VirtualBox Command Line Management Interface Version 1.6.4
(C) 2005-2008 Sun Microsystems, Inc.
All rights reserved.

iSCSI disk created. UUID: 606f234e-c599-4bfa-89bd-1c2c60296fcf

I see the VBox instance "logging" into the iSCSI export on the filer itself:

Tue Aug 19 12:17:45 PDT [iscsi.notice:notice]: ISCSI: New session from initiator iqn.2008-04.com.sun.virtualbox.initiator at IP addr 10.100.100.144

Starting the VM after assigning the virtual disk to it:

Unknown error creating VM.
VBox status code: -40 (VERR_TIMEOUT)
Result Code: 0x80004005
Component: Console
Interface: IConsole {d5a1cbda-f5d7-4824-9afe-d640c94c7dcf}
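
A quick way to rule out reachability or LUN-masking problems on the filer side is to probe the target from the host with the open-iscsi tools (a sketch, assuming open-iscsi is installed; the portal and IQN are the ones from the command above):

iscsiadm -m discovery -t sendtargets -p 10.144.40.52
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.50420453 -p 10.144.40.52 --login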

pcap of iSCSI session available at: http://compnetrx.com/iscsi.dmp

#2000 fixed Optional rollbackwards when deleting a snapshot Terry Ellison
Description

This scenario comes up quite a lot when discussing backup strategies for VMs where the end users require minimum downtime. A small change to the algorithm for deleting middle snapshots would allow effectively non-stop backups.

Consider the scenario where we have a running VM:

Snapshot 1    BaseSystem.vdi  R/O  10,356 blocks (1MB allocation blocks)
Current       {xxxxxxx1}.vdi  R/W   1,340 blocks

We now take a live snapshot, which hangs the VM for ~20 sec. (I am omitting the SAV files from this discussion for simplicity, but they don't materially affect the timings):

Snapshot 1    BaseSystem.vdi  R/O  10,356 blocks
Snapshot 2    {xxxxxxx1}.vdi  R/O   1,340 blocks
Current       {xxxxxxx2}.vdi  R/W       0 blocks

We can do a D/R restore from Snapshots 1 + 2, and since the base VDI has already been backed up, the only large file that needs to be gzipped (say) is {xxxxxxx1}.vdi, which will take less than a minute; in a reasonably quiescent system, Current ({xxxxxxx2}.vdi) will therefore only have perhaps 10 blocks in it. So now we wish to delete Snapshot 2 to restore the status quo:
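
A sketch of that backup step (the VM name "MyVM" and the paths here are assumptions; snapshot differencing images lived under the machine folder's Snapshots directory in releases of this era):

gzip -c "$HOME/.VirtualBox/Machines/MyVM/Snapshots/{xxxxxxx1}.vdi" > "/backup/{xxxxxxx1}.vdi.gz"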

Snapshot 1    BaseSystem.vdi  R/O  10,356 blocks
Snapshot 2    {xxxxxxx1}.vdi  R/O   1,340 blocks  <=== To be deleted.
Current       {xxxxxxx2}.vdi  R/W      10 blocks

Currently, doing this takes three steps (a command sketch follows the list):

  • Suspend (savestate) the VM, which takes about 15 secs, say
  • Delete Snapshot 2, which copies ~1,330 blocks from {xxxxxxx1}.vdi to {xxxxxxx2}.vdi and takes about 90 secs, say
  • Resume the VM, which takes about 15 secs, say
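
A minimal sketch of those three steps with VBoxManage (assuming a VM named "MyVM"; note that the 1.6-era CLI spelled the deletion subcommand "discard", while later releases call it "delete"):

VBoxManage controlvm MyVM savestate
VBoxManage snapshot MyVM discard "Snapshot 2"
VBoxManage startvm MyVM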

My point is twofold:

  1. Given we can do the savestate in pause mode, why can't we do the snapshot delete in pause mode as well?
  2. If we have the option to do the delete in the reverse direction, copying the 10 blocks from {xxxxxxx2}.vdi to {xxxxxxx1}.vdi, this will take a second or so.

This approach of allowing dynamic backwards deletion of snapshots will allow VMs to be backed up on the fly with two pauses of perhaps 15 + a few seconds, which for most systems can be considered non-stop. If we also allow a savestate -nomemory option (that is, recovery to this snapshot will force a reboot, which is probably fine for recovery purposes), the pause will fall to seconds. The current functionality requires stopping the system for a number of minutes.

It is also fairly trivial to do this back copy in three steps to ensure that the VDI always maintains integrity.

PS. You should allow Host and Guest types N/A

#2001 fixed VirtualBox 1.6.4 segfault when displayed via ssh/X11 forwarding => Fixed in 2.0.4 Jonathan Woithe
Description

During my testing of VirtualBox 1.6.4 (binary edition) I have noticed a problem when connecting to the VirtualBox host via ssh with X11 forwarding active. Let's say machine "Ahost" is where I'm sitting and "Bhost" is the machine with VirtualBox installed. On Ahost I do

ssh -Y Bhost

which gets me a prompt on Bhost. DISPLAY is set to "localhost:11.0" which is the proxy set up by ssh.
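
As a quick sanity check that the forwarded display is usable (assuming the standard X utilities are installed on Bhost):

echo $DISPLAY
xdpyinfo | head -n 3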

I then start VirtualBox, which runs as expected. I select the desired VM and "start" it. At this point it *might* run; if not, the VM status becomes "Aborted" and dmesg reports

VirtualBox[23317]: segfault at 0 ip 00000000 sp bfb37bac error 4 in VirtualBox[8048000+20f000]

If the VM starts then everything runs apparently without trouble until I do a shutdown/poweroff of the VM. The guest OS shuts down normally and the window disappears, but the VM status is again reported as "Aborted". This time dmesg reports

VirtualBox[23511]: segfault at b28db1c8 ip b5d67440 sp bfdd21d0 error 4 in libSDL-1.2.so.0.11.2[b5d4f000+66000]

None of these segfaults occur if I run VirtualBox directly from an X session running on Bhost (DISPLAY set to ":0.0"). However, curiously enough, if I do

xhost +
ssh Bhost

on Ahost, and then do

export DISPLAY=Ahost:0.0
VirtualBox

in the resulting shell running on Bhost the segfaults do not occur and everything seems to perform as expected. Finally, doing

ssh -Y Bhost

*on Bhost* and then running "VirtualBox" gives rise to the same problems.

So in other words, it is not the remote display itself which causes the problem, but simply the act of displaying on an X display created by ssh's X11 forwarding feature.

Has anyone else seen this? Are there any known fixes? I'm willing to run tests to assist tracking this down.
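
If a backtrace would help, this is roughly how one could be captured (a sketch; it assumes core dumps land in the working directory and that gdb resolves the real VirtualBox binary rather than a wrapper script):

ulimit -c unlimited
VirtualBox
gdb "$(which VirtualBox)" core
(gdb) bt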

Both Ahost and Bhost are running Slackware 12.1 with a 2.6.26 kernel.

Regards

jonathan
