Channel: Symantec Connect - Backup and Recovery
Viewing all 1524 articles

General Questions about Backup Exec configuration


Hi,

We're considering Backup Exec as a backup solution and trying to find out whether it fits our needs. We have three remote sites. Each site has a Windows server (2008 or 2012) and a handful of Windows and Mac clients. I've been flicking through the admin guide, but thought it would be quicker to ask some questions here:

1) We want each machine to be able to back up to any server - the one on its local network and either or both of the two remote ones. Is this possible?

2) We want to be able to select different files for backup on a machine depending on its backup destinations - for example if a machine is backing up to a local and remote server we may exclude some larger files from the remote backup. Is this possible?

3) Can we administer and configure multiple backup servers across different locations from one server? 

4) Can we set up email alerts for failed backups, failed storage and so on?

I hope these are simple questions for someone already using Backup Exec.

Thanks in advance.

Rob


Restore job reaches 100% on the status but keeps going.


Setup:

We are using two servers for the restore: our backup server LABMGT00 and our SQL server LABDB01. I am running Backup Exec 2014 on a Windows 2008 R2 server, and SQL Server 2012 Standard on Windows 2008 R2. This setup is completely isolated and was created to mimic our production setup.

Issue:

I am trying to restore data we back up from our production server. The job is 1.97 TB and the backup is all on one LTO6 tape (we were using two LTO5 tapes with the database split across both, but that was causing too many issues). I am able to inventory and catalog with no problems. When I create a restore job it will reach 100% on the status, but it will continue running for an extremely long time. For this particular job it takes about 11 hours to reach 100%, so I let it continue running until it passed 24 hours.

After I verified the data was there on the SQL server (which it was), I tried to hold the job; after 30 minutes it did nothing, so I tried to cancel, and that did nothing either. I ended up stopping the Backup Exec services on LABMGT00. This stopped the active job, and the data was still on LABDB01.

Question:

I tried to restore a smaller job of 60.4 GB; it was able to reach a status of 100% in 50 minutes, but it took 2 hours before the job stopped running and reported successful. I have not tried to re-run the large job yet, but is this normal? Is there any way to find out why it is taking so long? Since I can see the data after it reaches 100%, am I able to add or delete rows at that point?

I am extremely new to BE, and I only know the few things related to backing up and restoring. So anything, even basic information, is helpful.

Thank you,

Anthony


Backing up Linux OS using BRM


Hello everyone.

Can someone help me?

I need to configure BRM to back up Linux. Is this a good idea?

What do you recommend, and how should I start the implementation? Is there a guide with best practices?

Regards..


Server recovery - lost disks


I am involved with the recovery of a Windows SBS 2011 server that has serious damage on the boot drive caused by a faulty disk. I've had Microsoft support looking at it most of today, and it looks like we need a full restore. The server has BE 2012 SBS version, but I can't access it as the server won't boot. I have the recent backups (on a NAS box), but I do not have the original BE install disks, the BE serial number, or a BE recovery disk (I didn't set up the server).

Is there a downloadable ISO of BE that I can install in trial mode, or something similar, to access the backups? Alternatively, is there any way to convert the existing backups to a virtual machine (again using a trial version)?

thanks


HCART confusion.


Hi guys, I am changing my LTO4 tape library for a new LTO6 one (which I am going to use with LTO4 tapes).

I will keep the old LTO4 tapes for restores, but here is the thing, I have them configured as HCART.

I was reading a post here about something similar, and I didn't quite get the difference between HCART, HCART2 and HCART3.

What I understood is that the LTO generation is not tied to the HCART type, so I could have LTO4 as HCART3, for example, and it wouldn't matter.

I am going to use LTO5 tapes because I have a remote location with LTO5 drives, and I sometimes need to do cross restores.

So let's see if I got things right.

I have the LTO4 tapes that belong to the library I am removing configured as HCART; the remote library also has its LTO4 tapes as HCART.

For the new LTO5 tapes, I should set the new drives as HCART2 (or HCART3) to keep them distinct: a restore from an LTO4 tape goes in as HCART, and all new tapes will be HCART2, so they don't mix and nothing tries to write to one of those LTO4 tapes.

Is that more or less what should happen, or did I get it all mixed up?

thanks.


VM backups fail with status code 6 after upgrading server from 7.6.1.2 to 7.7.1


Hello,

VM backups fail with status code 6 after upgrading the server from 7.6.1.2 to 7.7.1; before that, everything was working fine. Please provide a solution.

06/02/2016 20:01:32 - begin writing
06/02/2016 20:01:59 - Error bpbrm (pid=22516) from client <client_name>: ERR - Error opening the snapshot disks using given transport mode: san:nbd Status 23

06/02/2016 20:02:00 - Critical bpbrm (pid=22516) from client <client_name>: FTL - cleanup() failed, status 6

06/02/2016 20:02:02 - Error bptm (pid=22539) media manager terminated by parent process
06/02/2016 20:02:09 - Critical bpbrm (pid=22516) unexpected termination of client <client_name>
06/02/2016 20:02:10 - Info bpbkar (pid=0) done. status: 6: the backup failed to back up the requested files
06/02/2016 20:02:10 - end writing; write time: 0:00:38
the backup failed to back up the requested files  (6)

Thanks


BE15 - B2D Verify hangs indefinitely, E_PVL_MEDIA_NOT_AVAILABLE


I have a specific job that has now become stuck on job status "Active: Queued - Verify" while backing up to disk a total of 5 separate times with the same issue, although each occurrence has been unpredictable: different days, weeks apart.

The job will hang in Verify indefinitely until I manually cancel, and then the duplicate-to-tape job runs, but it also hangs at the same byte count where the Verify job stopped (during the duplicate phase of the tape job, not the verify phase), and I have to manually cancel that as well. Ultimately this means we do not have a complete backup of this job on tape.

Digging deeper into the Backup 'Job Activity' log:  the Backup phase includes a full set of B2D media names, and the Verify phase stops listing B2D media at about 3/4 of the Backup media list.  (The B2D media names are different for each of the job failures, so it's not an issue with a specific B2Dxxxxxx.bkf file.)

The job errors have occurred with at least two weeks to a month between them; not consistent. (I have also removed and re-presented this specific disk to BE between the last two job errors, for unrelated reasons.)

Running the Debugger revealed that indeed the B2D file that would come next in the Verify sequence was somehow caught up in the PVLSVR process:

735    PVLSVR    2940    6/2/2016 9:09:04 AM    4448    AdammSession::MountSpecificMedia()
        Requested Media = {F39F96B9-A134-45F2-B819-12C99A497E4D}

736    PVLSVR    2940    6/2/2016 9:09:04 AM    4448    AdammSession::MountSpecificMedia() - currently mounted
        Media = {F39F96B9-A134-45F2-B819-12C99A497E4D}, "B2D321858"

737    PVLSVR    2940    6/2/2016 9:09:04 AM    4448    PvlSession::MountSpecificMedia() - ERROR = 0xE000810E (E_PVL_MEDIA_NOT_AVAILABLE) - 0000 seconds, 0000 (8364) SQL seconds

738    PVLSVR    2940    6/2/2016 9:09:04 AM    4448    AdammSession::Execute( ADAMM_SESSION_EXECUTE_MOVER_MOUNT_SPECIFIC )
        ERROR = 0xE000810E (E_PVL_MEDIA_NOT_AVAILABLE)

So the PVLSVR is unable to mount B2D321858 for some reason, perhaps because it thinks it is *already* mounted?

Looking at the details of B2D321858, there is nothing remarkable.   No read/write errors, 21 seeks, freshly created for the backup job.

The only vaguely related solution item I found was this article (https://www.veritas.com/support/en_US/article.000038222) ... but that discusses Tape media specifically, and there is apparently no option to Retire disk media in the same way.  Regardless, the problem is not related to a specific B2D file, as this job has hung in the same way on different B2D files each time.

And I don't think it's related to the content of the backup job either, because a different file is listed in the Activity Log each time the job has hung.

I'll restart the BE services, then do an Inventory/Catalog on the disk in question, but with no way of predicting when the job will fail, it is hard to isolate and experiment. Interestingly, no other job of our 50+ has this problem. This job backs up a specific directory on a physical server, but its settings are very standard and similar, if not identical, to our other physical server jobs.

Perhaps this is a database issue?

Any insights or suggestions would be most appreciated.

Thank you.


using BE 15 for ARCserve restore


Searching for a restore solution (catalog rebuild) for ARCserve has led me to Backup Exec.

I chanced upon someone's workaround for what I'm trying to accomplish. I have a 2004/2005 backup on DDS4 media created by ARCserve 2000. Now that I have revived the hardware required for ARCserve, I am stuck: I am unable to see the sessions to restore (though I can see the media, which was created on 12/04/2005).

Hopefully BE will help me achieve my target. If this proves successful, I hope this post gets stickied for those needing to restore using old tech.


netbackup unable to resolve hostname, no hostname alias added to pem name manager.


Hi,

I see the below error on my problems report:

netbackup unable to resolve hostname <hostname>, no hostname alias added to pem name manager.

Now, there are about 16 of these servers; they have all been decommissioned, they are not associated with any policy, and they do not appear under the clients' host properties in the GUI. Searching under the /usr/openv/netbackup/db/client directory, all 16 of these client folders are still in there. Is it OK to just remove/delete these client folders from this directory? Or is there another way of getting rid of these errors?
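
For what it's worth, the cleanup I have in mind looks roughly like this, as a dry run that only prints the commands (the client names below are placeholders for my 16 servers, and running `bpclient -delete` before removing the folder is my assumption about the supported order, please correct me if not):

```shell
# Dry run: print the cleanup for each decommissioned client instead of running it.
DECOM_CLIENTS="oldsrv01 oldsrv02"              # placeholders for the real names
CLIENT_DB=/usr/openv/netbackup/db/client       # standard client DB path
ADMINCMD=/usr/openv/netbackup/bin/admincmd

CMDS=""
for c in $DECOM_CLIENTS; do
  # First drop the client from NetBackup's configuration, then the leftover folder.
  CMDS="${CMDS}${ADMINCMD}/bpclient -client ${c} -delete
rm -rf ${CLIENT_DB}/${c}
"
done
printf '%s' "$CMDS"
```

If the printed commands look right, I would run them one client at a time and watch whether the problems report clears.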

Regards


BE restore job is failing


The Submit-BEFileSystemRestoreJob command is failing with an error.

The command is:

Get-BeJobHistory -FromLastJobRun -Name 'backup_definition_Full_275417-Full' | Submit-BEFileSystemRestoreJob -RedirectToPath C:\temp\tdir13778 -name restore_job87808_11113 -FileSystemSelection 'C:\temp\tdir21680' -Confirm:$false 

The error is:

Submit-BEFileSystemRestoreJob : Cannot make a selection for "C:\temp\tdir21680\*" because the directory "C:\temp\tdir21680" is not in the backup set "Wednesday, June 01, 2016 12:46:40 PM (Snapshot Full)".


Netbackup Audit


Good afternoon.

I want to enable auditing (including enhanced audit) on my master server.

How can I calculate the disk space I need on the master server for the audit logs?
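
My back-of-the-envelope estimate so far looks like this (all three inputs are assumptions about my environment, not Veritas figures):

```shell
# Rough sizing: audited operations per day x average record size x retention.
OPS_PER_DAY=2000        # assumed audited operations per day (policy changes, restores, logins)
BYTES_PER_RECORD=1024   # assumed average audit record size in bytes
RETENTION_DAYS=90       # how long I intend to keep audit records

TOTAL_BYTES=$((OPS_PER_DAY * BYTES_PER_RECORD * RETENTION_DAYS))
TOTAL_MB=$((TOTAL_BYTES / 1024 / 1024))
echo "Estimated audit log space: ${TOTAL_MB} MB"
```

Is that a sane way to size it, or do people just watch the audit data grow for a week and extrapolate?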


Driver needed for Exabyte Magnum 2 x 24


I need the driver for an Exabyte Magnum 2x24 tape library running on Windows 2008 R2 with Backup Exec 2010 R3. Can someone point me to the URL where I can download the driver?


Force redirected NDMP restore to use correct media server


We are restoring some NetApp-written tapes to a NetApp simulator with a different target directory. The simulator has no tape drives. It all worked perfectly.

Then we properly retired the NetApp, leaving us with the simulator, which then needed rebuilding.

Now when I try to restore to the simulator, I have a problem: the restore tries to use a different NDMP host (an EMC), and we know that won't work. We want to force a 3-way restore so it goes via the tape drives on the master, with the NetApp simulator doing the decoding of the traffic.

06/03/2016 16:33:23 - Info bptm (pid=15852) INF - Waiting for mount of media id 004 on server master for reading.
06/03/2016 16:33:23 - Error ndmpagent (pid=15869) failed to connect to NDMP server wronghost.us, connect error = -1,003 (Unable to connect to NDMP server)
06/03/2016 16:33:23 - Error ndmpagent (pid=15869) NDMP restore failed from path UNKNOWN
06/03/2016 16:33:23 - Error bptm (pid=15852) Unexpected NDMP message - not a connected message

wronghost.us is another (unused, as it happens) NDMP hostname. I can't even see where it grabs this from!

Comms to the NetApp simulator are perfectly functional. The media has been assigned to the master server.

Sol10 master at 7.6.0.4.

Your input appreciated,thanks Jim.


Error 200s and queries being invalid


Hello,

We have a scenario where, from time to time, the VIP policies seem to become corrupt: \ separators are replaced with %CM. When this happens, the query reports no backups scheduled to run, as it no longer finds any VMs matching the query.

1. How can we prevent this from happening?

Master server is on 7.7.2.

Error Screenshot.png


Remote agents crashing or hanging since FP4


Many of our remote agents on 2008 R2 and 2012 R2 servers have been crashing or hanging since FP4.

Remote systems with SQL 2008 R2 seem to be the most commonly and most frequently affected. Here is a sample image of what we see almost every day, from a host of servers:

Untitled.jpg


Unable to open Java Console - error stating "An unknown error has occurred during interface initialization"


Unable to open Java Console - error stating "An unknown error has occurred during interface initialization"

NBU version: 7.7.1

OS: Linux 6.2

We are using NBAC and Centrify (5.1) in our environment.


Backup Cluster File System


Hi all,

We have four (04) servers running RHEL 7; they are all part of the same Veritas cluster. They share six (06) file systems (CFS). My question is: how can we back up those file systems using the available servers? That is, if all four (04) servers are up, we spread the backup across them all; if a server is down, the backup is spread across the remaining three (03) servers; and so on.

The setup we are looking for should not require any manual intervention.
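
To illustrate the spread I have in mind, here is a round-robin sketch (node and mount point names are made up; the live-node list would really have to come from the cluster itself, e.g. from hastatus output):

```shell
# Round-robin assignment of CFS mounts to whichever nodes are currently up.
NODES="node1 node2 node3"                         # node4 assumed down in this example
FILESYSTEMS="/cfs1 /cfs2 /cfs3 /cfs4 /cfs5 /cfs6" # the six shared file systems

set -- $NODES        # expose live nodes as positional parameters $1..$N
N=$#
i=0
ASSIGNMENTS=""
for fs in $FILESYSTEMS; do
  idx=$(( (i % N) + 1 ))                 # cycle through the live nodes
  node=$(eval echo "\${$idx}")           # pick the idx-th node
  ASSIGNMENTS="${ASSIGNMENTS}${node} backs up ${fs}
"
  i=$((i + 1))
done
printf '%s' "$ASSIGNMENTS"
```

The part I can't see is how to make the backup product redo this spread automatically when a node drops, without anyone re-assigning things by hand.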

Thanks in advance.


strategy to replace library


Hello,
I'd like to replace our 10-year-old library with a new one, and would like to hear your thoughts about the strategy below.

Tasks I see

  • Make the new hardware available to NetBackup.
  • Find a way to redirect everything that currently goes to the old library to the new one.
  • The old drive is LTO3, the new one is LTO6, so the images on some tapes must be duplicated.
  • Our license is for 1 drive only, so we kindly got an eval license (25 days left).

What I have

Policies
Name                              Residence                            Pool           Retention LVL
EXCH_SVR1_AR                      Datastore                            NetBackup      10 (57 days)
FILE_SVR1_AR                      Datastore                            NetBackup      10 (57 days)
FILE_SVR2_AR                      Datastore                            NetBackup      10 (57 days)
FILE_SVR3_AR                      Datastore                            NetBackup      10 (57 days)
FILE_SVR4_AR                      Datastore                            NetBackup      10 (57 days)
FILE_LSRV_AR                      Datastore                            NetBackup      10 (57 days)
EXCH_SVR1_HYear                   LibSrv-hcart3-robot-tld-0 HalfYear   HalfYear       9 (Infinity)
FILE_WIND_HYear                   LibSrv-hcart3-robot-tld-0 HalfYear   HalfYear       9 (Infinity)
FILE_LINU_HYear                   LibSrv-hcart3-robot-tld-0 HalfYear   HalfYear       9 (Infinity)
FILE_EDAI_MANUAL                  LibSrv-hcart3-robot-tld-0 HalfYear   Edainst        9 (Infinity)
Collective_MANUAL                 LibSrv-hcart3-robot-tld-0 HalfYear   Collective     9 (Infinity)
Online_Hot_Catalog_Backup_Policy  ---                                  CatalogBackup  2 (3 weeks)
Pools
Name           Comment
None           the None pool
NetBackup      the NetBackup pool
DataStore      the DataStore pool             # d2t staging
CatalogBackup  NetBackup Catalog Backup pool  # tape
EdaInst        Holds Edai Backups             # tape
Collective     various 'archives'             # tape
HYear          Backs up to tape half a year   # tape
Scratch        Holds New & Expired Tapes

What I already did

  1. Created a new barcode rule for cleaning tapes (as the new library does not hide them from the software).
  2. Created a new barcode rule for the new batch of LTO6 tapes to go to the Scratch pool, type HCART (the old tapes were HCART3).
  3. Created new robot TLD1 (functional: it can be inventoried).
  4. Created a new drive, type HCART (the old tapes were HCART3; still waiting for the tapes to be delivered).
  5. Created a new storage unit with the modified density (HCART) and robot number (not sure why I need it; I just duplicated what existed).

What I think I should do

  • Assign two LTO6 Tapes to CatalogBackup Pool, stick one of them into the new library and hope it will be used. Swap next week.
  • Change the staging schedule's final destination to the new robot so that datastore pool stages to that.
  • Change all non-datastore policies from above to use LibSrv-hcart-robot-tld-1 instead of LibSrv-hcart3-robot-tld-0
  • Duplicate all images on tapes with retention level 9 (Infinity) to a minimum number of LTO6 tapes; process below:

  NBU will only append to an existing tape if the retention level is the same (fulfilled here).
  When you specify the destination to duplicate to, you can only specify the storage unit and pool.
  So, if there are other tapes with space left that fit all criteria, NBU may choose a different tape.
  This means you have to ensure that the tape you want to duplicate to is the only possible choice.

  Therefore:

  • Assign two tapes to the 'Collective' pool.
  • Remove *all* tapes from new library except one of these two tapes.
  • Remove *all* tapes from the old library except the source tape. Suspend the source tape.
  • Duplicate the images.
  • Cool, you are still here ;)
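
To keep myself honest, here is the duplication step written out as a dry run that only prints the bpduplicate calls (the backup ids below are placeholders; I would collect the real ones with bpimagelist first):

```shell
# Print (don't run) one bpduplicate per image, copying it to the new library.
DST_STU=LibSrv-hcart-robot-tld-1                 # storage unit on the new robot
DST_POOL=Collective                              # pool holding the two prepared tapes
BACKUP_IDS="host1_1464770000 host2_1464770001"   # placeholders for real backup ids

CMDS=""
for id in $BACKUP_IDS; do
  CMDS="${CMDS}bpduplicate -backupid ${id} -dstunit ${DST_STU} -dp ${DST_POOL}
"
done
printf '%s' "$CMDS"
```

With all other tapes removed from both libraries, as per the steps above, the prepared tape should be the only possible choice for each printed command.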

Any hints would be welcome...


robotic path missing after reboot


Hi all,

after rebooting the master server, my robotic path is missing:

/usr/openv/volmgr/bin/robtest        
Configured robots with local control supporting test utilities:
  TLD(0)     robotic path = /dev/sg76
  TLD(1)     robotic path = /dev/sg23
  TLD(2)     robotic path = MISSING_PATH:QUANTUM275102349_LL0

When I run the scan command, the changer is not listed.

Output of /var/log/messages:

Jun  5 17:43:10 mstr-nbkp-srv last message repeated 17 times
Jun  5 17:43:10 mstr-nbkp-srv tldd[4464]: TLD(2) unavailable: initialization failed: Unable to open robotic path
Jun  5 17:43:10 mstr-nbkp-srv tldcd[4912]: TLD(2) [4912] robotic path MISSING_PATH:QUANTUM275102349_LL0 does not exist
Jun  5 17:45:09 mstr-nbkp-srv last message repeated 20 times
Jun  5 17:45:12 mstr-nbkp-srv last message repeated 15 times
Jun  5 17:45:12 mstr-nbkp-srv tldd[4464]: TLD(2) unavailable: initialization failed: Unable to open robotic path
Jun  5 17:45:12 mstr-nbkp-srv tldcd[4912]: TLD(2) [4912] robotic path MISSING_PATH:QUANTUM275102349_LL0 does not exist
Jun  5 17:45:15 mstr-nbkp-srv last message repeated 21 times
Jun  5 17:45:37 mstr-nbkp-srv SQLAnywhere(nb_mstr-nbkp-srv): Starting checkpoint of "NBAZDB" (NBAZDB.db) at Sun Jun 05 2016 17:45
Jun  5 17:45:37 mstr-nbkp-srv SQLAnywhere(nb_mstr-nbkp-srv): Finished checkpoint of "NBAZDB" (NBAZDB.db) at Sun Jun 05 2016 17:45 
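
What I plan to try next, written out as the command sequence I would run (the /dev/sg name at the end is a placeholder until the OS sees the changer again, and -replace_robot being the right tool here is my assumption, please correct me):

```shell
# Print the diagnostic/repair sequence instead of running it.
VOLMGR=/usr/openv/volmgr/bin
PLAN="ls -l /dev/sg*                                    # does the OS see the changer at all?
${VOLMGR}/scan -changer                                 # what NetBackup's device scan reports
${VOLMGR}/tpautoconf -report_disc                       # OS vs NBU device config discrepancies
${VOLMGR}/tpconfig -d                                   # current device configuration
${VOLMGR}/tpautoconf -replace_robot 2 -path /dev/sgNN   # re-point TLD(2); path to be filled in"
printf '%s\n' "$PLAN"
```

If ls shows no sg device for the changer at all, I assume this is an OS/HBA rescan problem rather than a NetBackup one, and the NBU steps come only after the device reappears.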

Please assist

Thanks
