Channel: Symantec Connect - Backup and Recovery
Viewing all 1524 articles

BE 15 fails to backup W2000 VM using BE Agent 2010 R3

I need a solution

After upgrading BE 2012 to BE 15, I'm unable to back up the Windows 2000 VMs using the Backup Exec Agent for Windows (the last BE agent supporting W2000 is BE 2010).

When I try to Establish Trust in BE, it fails:

Calling Method 'SetServerTrust'

BEMSDK Method 'MediaServerAdvertisementUpdate'

BEMSDK Failure Code: E0009B91

The job failed with the following error: An attempt to connect to remote computer has failed. The remote computer is running a previous version of either the Agent for Windows or the Agent for Linux. Due to security enhancements related to SHA-2 certificate migration, the existing trust between the Backup Exec server and the remote computer is lost. You must upgrade the remote agent and then reestablish the trust. Refer to the link below for more information: http://entsupport.symantec.com/umi/V-312-2.

Regards.

Netbackup 5230 CIFS Share Permissions issue

I need a solution

We have recently had issues copying any software updates to the software share. I open the share via the CLISH (Manage > Software > Share Open), then map a drive to \\nbappliance\incoming_patches using the appliance admin credentials. However, when I try to copy any packages to that location I get "access denied". I've been able to do this in the past; then it suddenly stopped, and I can't figure out what changed. I've gone through this past forum thread, and my issue seems to be something beyond it: https://www.veritas.com/community/forums/cannot-pl...

Is it possible that permissions were altered somehow? If so, how can we correct them?

[Attached screenshot: nbudr1_software_share_0.jpg]

Backup Exec Jobs Stuck As Queued

I need a solution

I'm running Backup Exec 15 on Windows Server 2012. 

I have two tape drives that have been "orphaned" from my robotic library. I understand this is because I do not have licenses for them at this point in time; I am looking into purchasing them.

However, my robotic library is now showing two offline tape drives.

The Alerts are giving me a Storage Error.

Now when I go to perform any backup or restore job, it stays stuck in the "Active: queued" state. I understand this often happens when there is an alert that needs to be taken care of.

Is it possible that this Storage Error is responsible for my jobs staying queued?

nbstl command

I need a solution

NBU master and media server are at 7.6; the client (a Windows file server, also at 7.6) contains around 8 TB of data. Backup, duplication, and then AIR duplication are not an issue here. Original SLP: SLP_houstonmaster_5220_torontomaster (listed with -U):

backup                         stu_disk_houvnb Default_24x7_Win Expire After Copy
 duplication                   stu_houvnb5220 Default_24x7_Win Fixed
  replication to remote master *Remote*Master Default_24x7_Win Fixed
The SLP above has 3 different versions; the latest changes were made in the current version, 3. I want to modify an old version of the SLP so that backup images associated with it go to a different NBU appliance at the duplication step.

The new steps should be as below, but I am unable to do this via the command line; the command guide is not clear on the last step, the AIR replication (to a different NBU domain).

backup                         stu_disk_houvnb Default_24x7_Win Expire After Copy
 duplication                   stu_houvnb5220new Default_24x7_Win Fixed
  replication to remote master *Remote*Master Default_24x7_Win Fixed

nbstl -modify_version version 2 ??????????????????

Initial Seeding for Remote using VMware Backup

I need a solution

Hello,

I am having some trouble figuring out the process for performing an initial seed. I've looked at a few articles, but none specify what needs to be done for my current situation:

https://www.veritas.com/support/en_US/article.TECH...

My environment has 15 remote sites plus our datacenter. A few of these sites have data in the terabytes. For DR purposes we are doing file-level backups of VMs using the VMware options. Our server is on 7.7.2.

Our process: each site has a media server that makes a local backup copy of our file server VMs, then duplicates the backup to our datacenter, where our master is connected to a storage unit supplied by Veritas. I'm trying to figure out how to perform an initial seed of the VM backups for the duplication between media server and master. I don't see how the recommended way of seeding would benefit us, as it still requires going over the WAN pipe. Putting the "data" onto a USB drive would be great if I were just backing up files, but since I'm backing up a VM I'm not sure what I would put on the USB drive to ship. I thought about creating a backup locally, then shipping the image and importing it at our datacenter, but I haven't found any good articles on how that would work.

Would appreciate any help on trying to solve this. As mentioned we are on version 7.7.2

Thank you

LINUX NFS MOUNT BACKUP

I need a solution

Hi All,

I need a solution for NFS backup, given the environment details below.

Our customer has some of their files on an NFS mount point on a Linux server. They want to install the NetBackup master server on that same server and attach a tape drive to it as well.

I just want to know: if I quote them a NetBackup Standard server only, will it work, or do I need to add any other agents or options?

thanks & regards

Why are the tapes marked overwriteable?

I need a solution

When I put expired tapes into the tape machine, Backup Exec marks all the tapes overwritable and links them together. I only have 6 tapes available to write to. I used to have only two overwritable tapes. Why is Backup Exec making all the tapes overwritable, and how do I reset a tape to normal so it can be written to?

We are running Backup Exec 2012 on Windows Server 2008 R2 Standard.
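For context on why this happens: Backup Exec treats media as overwritable once the overwrite protection period (OPP) of its media set has elapsed since the last write, so if every tape suddenly shows as overwritable, the media set's OPP (not the individual tapes) is the first thing to check. A minimal sketch of that rule; the dates and the 4-week OPP below are made-up illustration values, not read from Backup Exec:

```python
from datetime import datetime, timedelta

def is_overwritable(last_write: datetime, opp: timedelta, now: datetime) -> bool:
    """A tape can be overwritten once its overwrite protection
    period (OPP) has fully elapsed since the last write."""
    return now >= last_write + opp

# Hypothetical media set with a 4-week OPP
opp = timedelta(weeks=4)
now = datetime(2016, 4, 5)

print(is_overwritable(datetime(2016, 2, 1), opp, now))   # True: OPP expired
print(is_overwritable(datetime(2016, 3, 20), opp, now))  # False: still protected
```

If the media set's OPP is set to "None" (or very short), everything in it becomes overwritable as soon as it expires; assigning the tapes to a media set with a longer OPP restores the protection.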

bpexpdate command rollback

I need a solution

Good day dear team,

I'm wondering: once you force-expire a tape with the command "bpexpdate.exe -m A00000 -d 0",

is there a way to roll back and get its original expiration date back?

vxms log question

I need a solution

NBU 7.6.0.4 running on Windows Server 2008 R2. Master/Media on same server.

I have some questions regarding the NetBackup\logs\vxms directory:

1. Over the weekend, the drive this directory is on ran out of space. The vxms directory was the largest consumer, and to free up some space I moved about a week's worth of the oldest files in this directory off the server. This directory looks like it is kept to about 30 days' worth. Will moving these files cause problems for NetBackup when it goes to do its pruning schedule?

2. What exactly is kept in this directory? The *provider.log and *core.log files are very large, and I'm trying to understand what the different files in this directory are for. I've read through the NBU VMware Admin Guide, but that is mainly configuration, not theory of operation.

3. VXMS_VERBOSE is set to level 5. I'm not sure if this is the default setting; our NBU environment was configured by someone who has since retired. Does the verbosity need to be set to this level?
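On question 1: age-based log cleanup of this kind simply deletes files older than the retention window, so removing older files by hand generally does not upset it. A rough stand-in for that behavior; this is an illustration of age-based pruning, not NetBackup's actual cleanup code, and the 30-day window is an assumption:

```python
import os
import time

def prune_logs(log_dir: str, max_age_days: int = 30) -> list:
    """Delete files older than max_age_days from log_dir and
    return the removed file names, oldest-first by listing order.
    Mimics simple age-based log rotation."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        # Only plain files count; anything with an mtime before the
        # cutoff is past the retention window and gets deleted.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Because the check is purely per-file age, a file that was already moved away is simply never seen by the next cleanup pass.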

Thanks for any guidance. I'm relatively new to NetBackup as you can likely tell.

Kevin

NBDISCO on MASTER server

I need a solution

Whilst I enjoy OIP, I don't enjoy the nbdisco process swallowing resources. So my question is simple: can the nbdisco process on the master be disabled, and if so, is there a recommended method to permanently stop it from executing?

Solaris 10 Sparc / NB7601

Thanks in advance, Jim

Backup Exec 15 - S3 Bucket - Can you configure proxy settings for HTTP/HTTPS traffic?

I need a solution

We are stuck trying to connect a Backup Exec media server located in a secure, non-internet-connected network. The media server does have an HTTP/HTTPS proxy available to grant access to the internet, but otherwise the server has no general access to the internet.

We have configured the internet proxy settings for the operating system (Windows Server 2012 R2); however, the Backup Exec application does not seem to respect these settings, nor does the application have any settings of its own to configure a proxy.

The LiveUpdate application that comes along with Backup Exec to facilitate the updating process does have proxy settings that can be configured.

Are we missing something? Does Backup Exec have proxy settings that can be configured to assist with cases where the Media Server is located in a secure network?

What's a good method for long-term cloud archives?

I need a solution

Hello,

I have set up a Google Cloud bucket for the purpose of storing monthly full backups. I find, however, that I can't afford to do a full backup each month, as it takes too long on my current internet connection.

How (if possible) can I do a full backup one time, but then have only the changes uploaded to the cloud, with each change acting like a snapshot in the cloud, so that I still have the restore option of a full backup (snapshot)?

Is this possible using Backup Exec and Google Cloud S3 buckets (or any cloud option with Backup Exec), or does Backup Exec not operate in this fashion?
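What is being described here is commonly called an "incremental forever" or "synthetic full" scheme: after one initial full, each backup uploads only changes, yet any restore point can be reconstructed by replaying the chain on top of the full. Whether a given Backup Exec version supports synthesizing fulls on a particular cloud target is version-dependent, but the reconstruction idea itself is simple; a toy model (file contents stand in for backed-up data, and None marks a deletion):

```python
def synthesize(full: dict, incrementals: list) -> dict:
    """Rebuild a complete restore point from one full backup plus a
    chain of incrementals. Each incremental maps path -> new content,
    or None when the file was deleted in that increment."""
    state = dict(full)
    for inc in incrementals:
        for path, content in inc.items():
            if content is None:
                state.pop(path, None)  # file removed in this increment
            else:
                state[path] = content  # file added or changed
    return state

full = {"a.txt": "v1", "b.txt": "v1"}
incs = [{"a.txt": "v2"}, {"b.txt": None, "c.txt": "v1"}]
print(synthesize(full, incs))  # {'a.txt': 'v2', 'c.txt': 'v1'}
```

The trade-off is that a long incremental chain makes restores depend on every link, which is why products that support this periodically "synthesize" a new full out of the chain.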

Thank you

Backup fails because (48) client hostname could not be found

I need a solution

Hi All,

Backup for one Windows client is failing again and again. As I am new to NetBackup, I am not able to resolve it. Kindly help me with this.

The backup is a SharePoint daily/weekly backup, where the job runs with File List/Path: Microsoft SharePoint Resources:\* and fires 4 other backups. One of the 4 is failing; the rest complete successfully.

The one which fails has File List/Path: Microsoft SharePoint Resources:\Global Settings (OMG-SPAPP-01)\

Status : 1: (48) client hostname could not be found 

Log :

04/04/2016 21:01:23 - Info nbjm (pid=10689) starting backup job (jobid=6789379) for client ALP-SPAPP-01-VM, policy PURE_SHAREPOINT_PROD, schedule Daily-Full
04/04/2016 21:01:23 - Info nbjm (pid=10689) requesting STANDARD_RESOURCE resources from RB for backup job (jobid=6789379, request id:{01EAB20C-FAA0-11E5-8423-00144F43C41C})
04/04/2016 21:01:23 - requesting resource ALP-NBUMED-01-DDSU
04/04/2016 21:01:23 - requesting resource nbumaster.NBU_CLIENT.MAXJOBS.ALP-SPAPP-01-VM
04/04/2016 21:01:23 - requesting resource nbumaster.NBU_POLICY.MAXJOBS.PURE_SHAREPOINT_PROD
04/04/2016 21:01:25 - granted resource  nbumaster.NBU_CLIENT.MAXJOBS.ALP-SPAPP-01-VM
04/04/2016 21:01:25 - granted resource  nbumaster.NBU_POLICY.MAXJOBS.PURE_SHAREPOINT_PROD
04/04/2016 21:01:25 - granted resource  MediaID=@aaaaK;DiskVolume=PureDiskVolume;DiskPool=ALP-DedupePool-01;Path=PureDiskVolume;StorageServer=ALP-NBUMED-01;MediaServer=ALP-NBUMED-01
04/04/2016 21:01:25 - granted resource  ALP-NBUMED-01-DDSU
04/04/2016 21:01:27 - estimated 0 kbytes needed
04/04/2016 21:01:27 - Info nbjm (pid=10689) started backup (backupid=ALP-SPAPP-01-VM_1459800086) job for client ALP-SPAPP-01-VM, policy PURE_SHAREPOINT_PROD, schedule Daily-Full on storage unit ALP-NBUMED-01-DDSU
04/04/2016 21:01:28 - started process bpbrm (pid=7580)
04/04/2016 21:01:30 - Info bpbrm (pid=7580) ALP-SPAPP-01-VM is the host to backup data from
04/04/2016 21:01:30 - Info bpbrm (pid=7580) reading file list from client
04/04/2016 21:01:30 - connecting
04/04/2016 21:01:33 - Error bpbrm (pid=7580) bpcd on OMG-SPAPP-01 exited with status 48: client hostname could not be found
04/04/2016 21:01:33 - Info bpbkar32 (pid=0) done. status: 48: client hostname could not be found
04/04/2016 21:01:33 - end writing
client hostname could not be found  (48)
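Status 48 generally means the server running the job cannot resolve a hostname it was handed; note that in the log above bpbrm is contacting OMG-SPAPP-01 even though the job is for client ALP-SPAPP-01-VM, so both names are worth checking from the master/media server. A quick hedged check using Python's resolver (the hostnames are taken from the log; this only tests forward DNS resolution, not NetBackup connectivity):

```python
import socket

def check_resolution(hostname: str) -> str:
    """Attempt forward name resolution for hostname and report
    either the resolved address or the resolver error."""
    try:
        addr = socket.gethostbyname(hostname)
        return f"{hostname} -> {addr}"
    except socket.gaierror as err:
        return f"{hostname} FAILED: {err}"

# Hostnames taken from the job log; run this on the master/media server
for host in ("ALP-SPAPP-01-VM", "OMG-SPAPP-01"):
    print(check_resolution(host))
```

If a name fails to resolve, fixing DNS (or adding a hosts-file entry) on the server doing the lookup is usually the next step; with SharePoint policies, also check the farm member names configured in the policy, since those are the names NetBackup will try to resolve.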
 

SSR schedule

I need a solution

Hello all,

While scheduling a backup, I wonder: which day does SSR consider the first day of the week?

Thanks for replying

[Attached screenshot: Schedule.JPG]

Restore RMAN

I need a solution

Hi,

I'm trying to restore an Oracle database backup made by Backup Exec 15 to a different server from the backup source database, but I'm receiving a generic error.
I tried two ways to restore:
     a) Configure restore on Backup Exec Server
     b) Execute commands directly on RMAN

but the error is the same.

Alert.log has only the startup database information, nothing about the restore.

Some information:

* init and pwd files are in ORACLE_HOME\database
* Oracle 11.2.0.4 Standard
* Windows Firewall is disabled

RMAN> startup nomount

connected to target database (not started)
Oracle instance started

RMAN>  run {
2> ALLOCATE CHANNEL ch0 TYPE 'SBT_TAPE';
3> RESTORE CONTROLFILE FROM 'BE_4kr22919_1_1';
4> }

allocated channel: ch0
channel ch0: SID=298 device type=SBT_TAPE
channel ch0: Symantec/BackupExec/1.1.0

Starting restore at 05-APR-16

released channel: ch0
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 04/05/2016 09:23:48
ORA-27191: sbtinfo2 returned error

Changing Database Maintenance Log Path?

I need a solution

Does anybody know if it is possible to rename, or even change the path of, the database maintenance log? I do not want to change the path where the backup logs go, though.

Is it even possible to make it so that no actual log is created for it?

Any way to allow one BE media server to host more than one deduplication storage, or to merge multiple deduplication stores into one?

I need a solution

We have 1 master site + 4 satellite sites.

Each satellite site connects to the master site through a 6 Mb MPLS connection, as shown in the picture below.

[Attached diagram: Site.jpg]

We would like to centralize backups at the master site by using deduplication replication (through the Central Admin Server Option).

However, after calculation, we found we would need at least 3 months to do the initial sync from a satellite site to the master site over the network.
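That estimate is plausible for a 6 Mb/s link; a back-of-the-envelope check (the per-satellite dataset size is an assumption for illustration, and real-world throughput will be lower than line rate):

```python
def full_sync_days(data_tb: float, link_mbps: float, efficiency: float = 1.0) -> float:
    """Days needed to push data_tb terabytes over a link_mbps link,
    assuming the link runs continuously at the given efficiency."""
    data_bits = data_tb * 1e12 * 8          # TB -> bits
    seconds = data_bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400

# Hypothetical 5 TB of post-dedup data per satellite over the 6 Mb MPLS link
print(round(full_sync_days(5, 6)))  # ~77 days, i.e. close to 3 months
```

So even at full line rate, a few terabytes per satellite saturates the MPLS link for months, which is why a physical seed (tape or disk shipment) is attractive here.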

To avoid the long initial sync, we are going to manually copy the data as follows:

1. On the satellite site, create the deduplication storage and run a full backup to it.

2. Stop the deduplication storage and copy the deduplication storage files to tapes.

3. Ship the tapes offsite to the master site and restore the deduplication storage files there.

4. Create new deduplication storage on the master site by importing the deduplication storage files.

5. Enable the deduplication storage on both the master site and the satellite site, and enable deduplication replication (through the Central Admin Server Option).

The above method is easy to perform if there is only 1 master site + 1 satellite site.

However, we have 4 satellite sites, and we cannot do this 4 times, as 1 Backup Exec media server can only host 1 deduplication storage.

Would you please suggest:

- Any way to host 4 deduplication stores on 1 Backup Exec media server, so that each deduplication store serves only its own satellite site?

Or...

- Any way to merge 4 "offsited" deduplication stores into 1, so that the single deduplication storage on the master site can serve all 4 satellite sites?

Thank you very much!

netbackup configuration in MS failover cluster

I need a solution

Hi,

Please advise if I am wrong. Environment details are listed below: Windows 2012, NetBackup 7.7.2.

Name = Host A, 192.168.99.100

Name = Host B, 192.168.99.101

NFS drive: D:\

Quorum disk: E:\

under an MS failover cluster:

Virtual Name = HostAB, 192.168.99.102

NetBackup installation:

Selected install in cluster mode.

Virtual Name = HostAB (with physical host names HostA and HostB as the additional master and media servers).

On the next screen, cluster settings: create a new cluster group (virtual IP: 192.168.99.103, virtual hostname: HostD, shared disk path, public network).

Clicked the "cluster configuration" button; cluster configuration was successful.

Installation done.

When I open the Java admin console, I get a certificate error: "the host which you are trying to connect does not have a netbackup security certificate installed. the security certificate is mandatory".

I tried to run this command in a Windows shell: install_path/veritas/netbackup/bin/admincmd/bpnbaz -SetupAt

want to run y/n: y

It failed.

Can someone guide me where I am wrong?

How to back up a huge SAP database?

I need a solution

Hello,

I am looking for a new solution to back up a big SAP database: 17 TB, with about 300 GB of changes per month.

For now we are using customized scripts that take a FlashCopy from the DS8000 (the source array for this DB). The flash copy is mounted via a media server and goes to tape over the SAN using an RMAN backup.

We recently upgraded NBU to 7.7.2 and bought 4 x 5330 appliances as media servers (4 x 200 TB).

What backup solution do you suggest?

I have read about Oracle Copilot; what do you think?

There is also the client snapshot method for SAP.

Or maybe use a standard RMAN backup via an Oracle Intelligent Policy?
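One number worth putting on the table when comparing these options: with 17 TB total and roughly 300 GB changing per month, under 2% of the database changes monthly, so any approach that can do incrementals after a one-time initial full (rather than repeated fulls) cuts the recurring data movement enormously. A back-of-the-envelope comparison; the two strategies are idealized, not a claim about any specific product's behavior:

```python
def monthly_transfer_tb(full_tb: float, change_gb_per_month: float,
                        monthly_fulls: bool) -> float:
    """TB moved per month under two idealized strategies:
    a fresh full every month, versus incrementals only
    (after a one-time initial full)."""
    if monthly_fulls:
        return full_tb
    return change_gb_per_month / 1000.0  # GB -> TB

print(monthly_transfer_tb(17, 300, monthly_fulls=True))   # 17.0 TB/month
print(monthly_transfer_tb(17, 300, monthly_fulls=False))  # 0.3 TB/month
```

That ~50x difference is the main argument for incremental-merge style approaches (and for deduplicating targets, which approximate the same saving at the storage layer even when the backup job is a full).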

Move Deduplication Folder to New Server

I need a solution

I have an existing deduplication folder on one server, and I am trying to move it to a new Windows Server 2012 machine with more storage. Copying and pasting the folder from the old server to the new one did not work as expected.

I want to be able to access the data currently in the deduplication folder on the new server. How would I do this?
