Channel: Symantec Connect - Backup and Recovery

To disable SSLv3 and TLS 1.0 support in Backup Exec


Hi, my 2015 project audit raised a finding regarding SSLv3 and TLS 1.0; the auditor advised disabling both as they are not secure. I understand that, at the moment, every version of Backup Exec still requires SSLv3 and TLS 1.0, so we have to seek an exception for this. As the auditor noted, SSLv3 and TLS 1.0 have known vulnerabilities and are not safe, so please consider removing support for both protocols.


Can I find the file owner?

I need a solution

NBU 7.6.0.3

When NBU encounters a locked file, or access is denied because someone has the file open during the backup, is there a way for NBU to identify the account that caused the access-denied error?


CBT enabled by NetBackup?

I need a solution

Hi all,

I have a question about the Accelerator and CBT (Changed Block Tracking) options on a VM.

CBT is currently disabled on the VM. Can NetBackup enable it automatically if Accelerator is enabled on the VMware policy? And in that case, does NetBackup power off the VM?

Thanks a lot


After upgrade, unable to see all details in the NBU console

I need a solution

Hello

We have recently upgraded our master server from NBU 7.7 to 7.7.1, and since then we cannot see any jobs in the Activity Monitor or any other details; the console gives an error.

Please check the attached screenshot.

We are currently managing backups using the Java console on another workstation, because the console on the master server does not work.

Please suggest.


NBvcPlugin (Thick Client Version)

I do not need a solution (just sharing information)

Just wanted to share, as I had some issues during this install where the communication seems to be off and you then cannot search for clients or do restores via the plugin. It seems the go-to answer if you open a case is to completely remove and reinstall the plugin, which can be time consuming. I have had luck editing a couple of files on the plugin server itself to adjust the vCenter server name information, or even the master server name as well. You can change it to the short name or FQDN as needed, and this is much quicker than reinstalling from the .ova and so on.

On the plugin server built via the .ova:

/opt/SYMCnbvcPlugin/data/servers/vcenterlist (change the vCenter:servername entry)

/opt/SYMCnbvcPlugin/data/servers/serverlist (contains the master server name)

/etc/nbvcplugin.env (change the export SERVER_NAME=servername line)

You will also find the port information in these files.

I have also had luck adjusting OS-level name resolution by editing the files below, which include the DNS search string information:

/etc/hosts

/etc/hostname (Ensure hostname is accurate as your master server and vCenter servers resolve the same name)

/etc/resolv.conf (Edit search string and DNS servers as needed)

I am no expert, but being able to adjust these settings has allowed me to get the plugin working on 2 different occasions and saved me time, as I did not have to completely reinstall over and over.
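For anyone trying the same thing, a quick way to see what is currently configured before editing anything is simply to print those files (a minimal sketch using the paths listed above; run it on the plugin server):

cat /opt/SYMCnbvcPlugin/data/servers/vcenterlist
cat /opt/SYMCnbvcPlugin/data/servers/serverlist
grep SERVER_NAME /etc/nbvcplugin.env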

Also, in the interest of making it a bit easier to work on (I do not enjoy working via a VMware console), you can do the following:

On the plugin server:

vi /etc/ssh/sshd_config

Find

PermitRootLogin

Change no to yes and save

service sshd status - You will see that it is not running

service sshd start - This will start the service

You can now SSH into the server and work on it as needed. Note that if you reboot the server, the SSH service does not start automatically, so you will have to start it again.
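If you prefer to script that step instead of using vi, something like this should do the same thing (a rough sketch; it assumes the line currently reads exactly "PermitRootLogin no", as described above):

# flip PermitRootLogin from no to yes, then start the SSH daemon
sed -i 's/^PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config
service sshd start
service sshd status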


What is the most stable 2013 R2 version of SSRS?

I do not need a solution (just sharing information)

We have various versions of SSRS installed: 2013 (11.0.2), 2013 R2, and others. I want to get our environment onto the most stable, latest version for our Windows 2003 and above environment. Is there one anyone can recommend? Thanks in advance.


Help needed: I need to re-download my SSR 2011 software

I need a solution

I have a server that I had to rebuild. The server had Symantec System Recovery 2011 installed. My client does not know where their software is located. Is there a way to download the software from the web? We have the license key but not the install disk. Any help would be much appreciated!


Phase 2 Import fails - status 85 and (191)

I need a solution

I hope someone can help me with this problem.

I have been asked to restore data from another NBU domain; the data is on one tape. I have successfully completed the Phase 1 import, then started the Phase 2 import, but it fails with these messages:

12/31/2015 14:56:49 - begin reading
12/31/2015 15:11:50 - Error bptm (pid=1708) cannot read image from media id GAW050, drive index 0, err = 23
12/31/2015 15:11:50 - Info bptm (pid=1708) EXITING with status 85 <----------
12/31/2015 15:11:50 - Error bpimport (pid=4236) Import of policy BDS-ACP-Windows-VMs, schedule Weekly-Full (ACPBDER01_1448737342) failed, media read error.
12/31/2015 15:11:50 - Error bpimport (pid=4236) Status = no images were successfully processed.
12/31/2015 15:11:51 - end Import; elapsed time 0:15:46
no images were successfully processed  (191)

I have checked the block size of the tape and it is 262144. I then created a SIZE_DATA_BUFFERS file with the value 262144 in \Program Files\Veritas\NetBackup\db\config, but this has not changed the error messages.
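For reference, the file can be created like this from a command prompt on the media server (a minimal sketch; the D:\ install path is taken from the bptm log below, so adjust as needed):

rem write the tape block size into SIZE_DATA_BUFFERS and confirm it
echo 262144 > "D:\Program Files\Veritas\NetBackup\db\config\SIZE_DATA_BUFFERS"
type "D:\Program Files\Veritas\NetBackup\db\config\SIZE_DATA_BUFFERS"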

I have enabled bptm logging; an extract of the log showing the error is below:

15:10:26.068 [964.4824] <2> report_drives: FILE = D:\Program Files\Veritas\NetBackup\db\media\tpreq\drive_IBM.ULT3580-HH5.000
15:10:26.068 [964.4824] <2> main: Sending [EXIT STATUS 0] to NBJM
15:10:26.068 [964.4824] <2> bptm: EXITING with status 0 <----------
15:11:50.324 [1708.2660] <2> openTpreqFile: tpreq_file: D:\Program Files\Veritas\NetBackup\db\media\tpreq\drive_IBM.ULT3580-HH5.000, serial_num: 1068010079
15:11:50.324 [1708.2660] <2> get_drive_path: SCSI coordinates {5,0,0,0}, dos_path \\.\Tape0, pnp_path \\?\scsi#sequential&ven_ibm&prod_ult3580-hh5#5&1a5f4c65&0&000000#{53f5630b-b6bf-11d0-94f2-00a0c91efb8b}
15:11:50.324 [1708.2660] <2> check_serial_num: serial number match for drive with SCSI coordinates {5,0,0,0}, dos_path \\.\Tape0, drive serial number 1068010079, expected serial number 1068010079
15:11:50.387 [1708.2660] <2> init_tape: \\.\Tape0 (SCSI coordinates {5,0,0,0}) configured with blocksize 0
15:11:50.402 [1708.2660] <2> init_tape: \\.\Tape0 (SCSI coordinates {5,0,0,0}) has compression enabled
15:11:50.402 [1708.2660] <2> io_open: SCSI RESERVE (D:\Program Files\Veritas\NetBackup\db\media\tpreq\drive_IBM.ULT3580-HH5.000)
15:11:50.402 [1708.2660] <2> openTpreqFile: tpreq_file: D:\Program Files\Veritas\NetBackup\db\media\tpreq\drive_IBM.ULT3580-HH5.000, serial_num: 1068010079
15:11:50.402 [1708.2660] <2> get_drive_path: SCSI coordinates {5,0,0,0}, dos_path \\.\Tape0, pnp_path \\?\scsi#sequential&ven_ibm&prod_ult3580-hh5#5&1a5f4c65&0&000000#{53f5630b-b6bf-11d0-94f2-00a0c91efb8b}
15:11:50.402 [1708.2660] <2> check_serial_num: serial number match for drive with SCSI coordinates {5,0,0,0}, dos_path \\.\Tape0, drive serial number 1068010079, expected serial number 1068010079
15:11:50.433 [1708.2660] <2> init_tape: \\.\Tape0 (SCSI coordinates {5,0,0,0}) configured with blocksize 0
15:11:50.433 [1708.2660] <2> io_open: file D:\Program Files\Veritas\NetBackup\db\media\tpreq\drive_IBM.ULT3580-HH5.000 successfully opened (mode 0)
15:11:50.433 [1708.2660] <2> tape_error_rec: locating to absolute block number 3 for error recovery
15:11:50.433 [1708.2660] <2> tape_error_rec: absolute block position after read position is 3
15:11:50.449 [1708.2660] <2> read_data: ReadFile returned FALSE, Data error (cyclic redundancy check). (23);bytes = 0
15:11:50.449 [1708.2660] <2> read_data: attempting read error recovery, err = 23
15:11:50.449 [1708.2660] <2> tape_error_rec: error recovery to block 3 requested
15:11:50.449 [1708.2660] <16> read_data: cannot read image from media id GAW050, drive index 0, err = 23
15:11:50.449 [1708.2660] <2> send_MDS_msg: DEVICE_STATUS 1 28 netbackup GAW050 4000022 IBM.ULT3580-HH5.000 2000002 READ_ERROR 0 0
15:11:50.511 [1708.2660] <2> log_media_error: successfully wrote to error file - 12/31/15 15:11:50 GAW050 0 READ_ERROR IBM.ULT3580-HH5.000
15:11:50.511 [1708.2660] <2> notify: executing - D:\Program Files\Veritas\NetBackup\bin\restore_notify.cmd bptm ACPBDER01_1448737342 import
15:11:50.808 [1708.2660] <2> check_error_history: just tpunmount: called from bptm line 13238, EXIT_Status = 85
15:11:50.808 [1708.2660] <2> io_close: closing D:\Program Files\Veritas\NetBackup\db\media\tpreq\drive_IBM.ULT3580-HH5.000, from bptm.c.16674
15:11:50.808 [1708.2660] <2> openTpreqFile: tpreq_file: D:\Program Files\Veritas\NetBackup\db\media\tpreq\drive_IBM.ULT3580-HH5.000, serial_num: 1068010079
15:11:50.808 [1708.2660] <2> get_drive_path: SCSI coordinates {5,0,0,0}, dos_path \\.\Tape0, pnp_path \\?\scsi#sequential&ven_ibm&prod_ult3580-hh5#5&1a5f4c65&0&000000#{53f5630b-b6bf-11d0-94f2-00a0c91efb8b}
15:11:50.808 [1708.2660] <2> check_serial_num: serial number match for drive with SCSI coordinates {5,0,0,0}, dos_path \\.\Tape0, drive serial number 1068010079, expected serial number 1068010079
15:11:50.823 [1708.2660] <2> init_tape: \\.\Tape0 (SCSI coordinates {5,0,0,0}) configured with blocksize 0
15:11:50.839 [1708.2660] <2> init_tape: \\.\Tape0 (SCSI coordinates {5,0,0,0}) has compression enabled
15:11:50.839 [1708.2660] <2> io_open: SCSI RESERVE
15:11:50.839 [1708.2660] <2> io_open: file D:\Program Files\Veritas\NetBackup\db\media\tpreq\drive_IBM.ULT3580-HH5.000 successfully opened (mode 2)
15:11:50.839 [1708.2660] <2> process_tapealert: TapeAlert returned 0x00000000 0x00000000 (from tpunmount)
15:11:50.839 [1708.2660] <2> io_close: closing D:\Program Files\Veritas\NetBackup\db\media\tpreq\drive_IBM.ULT3580-HH5.000, from bptm.c.16710
15:11:50.839 [1708.2660] <2> drivename_write: Called with mode 1
15:11:50.839 [1708.2660] <2> drivename_unlock: unlocked
15:11:50.839 [1708.2660] <2> drivename_checklock: Called
15:11:50.839 [1708.2660] <2> drivename_lock: lock established
15:11:50.839 [1708.2660] <2> drivename_unlock: unlocked
15:11:50.839 [1708.2660] <2> drivename_close: Called for file IBM.ULT3580-HH5.000
15:11:50.839 [1708.2660] <2> tpunmount: NOP: MEDIA_DONE 0 11 0 GAW050 4000022 180 {F97B6505-0125-4B0A-9BCF-51A0F2120651}
15:11:50.839 [1708.2660] <2> bptm: Calling tpunmount for media GAW050
15:11:50.839 [1708.2660] <2> send_MDS_msg: MEDIA_DONE 0 11 0 GAW050 4000022 180 {F97B6505-0125-4B0A-9BCF-51A0F2120651}
15:11:50.839 [1708.2660] <2> packageBptmResourceDoneMsg: msg (MEDIA_DONE 0 11 0 GAW050 4000022 180 {F97B6505-0125-4B0A-9BCF-51A0F2120651})
15:11:50.839 [1708.2660] <2> packageBptmResourceDoneMsg: keyword MEDIA_DONE version 0 jobid 11 copyNum 0 mediaId GAW050 mediaKey 4000022 unloadDelay 180 allocId {F97B6505-0125-4B0A-9BCF-51A0F2120651}
15:11:50.839 [1708.2660] <2> packageBptmResourceDoneMsg: returns 0
15:11:50.839 [1708.2660] <2> JobInst::sendIrmMsg: returning
15:11:50.839 [1708.2660] <2> bptm: EXITING with status 85 <----------

Can anyone suggest why I cannot read this tape?

Thanks


I would like to check whether a backup job (MS/SAP) finished correctly using Windows Events

I need a solution

Good Day all, and Happy New Year!

While still a bit tired from all the eating over the New Year's celebrations, I was thinking about how to improve the monitoring overview of a NetBackup solution for my customers.
One of the things I would like to check is the completion of all the backups for all the Windows agents we have, using both MS-Windows backups (full/incremental) and SAP/Oracle ones.
I run Nagios, which already checks for critical events on every agent, but I would like to add one or more checks for the completion of those backups.
I was not able to find any relevant events after a backup finished, whether successfully or not.
The alternative would be to use the Linux master server, but for the moment I am trying to figure out how to do it using the Windows agents.
Has anyone already done something similar?
Could you please tell me whether the NBU agent writes any events (and which ones, in which cases) for the backup status?
Thank you.

Kind Regards,

Michele
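P.S. For completeness, the master-server alternative I mentioned would look roughly like the sketch below (assuming the standard NetBackup admincmd location on a Linux master); I would still prefer a check that works from the Windows agents themselves.

# backup status records for the last 24 hours
/usr/openv/netbackup/bin/admincmd/bperror -backstat -hoursago 24
# or a summary from the jobs database
/usr/openv/netbackup/bin/admincmd/bpdbjobs -summary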


Moving backup images from one disk to another

I need a solution

For example, if one of the VTL disks gets full and the backup data is important, is it possible to move the backup images from one VTL disk to another VTL disk? If anyone knows, please tell me the procedure for moving the backup images.
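One commonly used approach for this is image duplication, sketched below (placeholders only: the backup ID and destination storage unit name are made up, and the exact bpduplicate options should be checked for your NetBackup version):

# duplicate an image to a storage unit on the other VTL disk
bpduplicate -backupid client1_1451700000 -dstunit VTL2_STU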


RMAN backups failing.

I need a solution

NetBackup logs and the Activity Monitor show the backup as successful, although very little data is written (a few KB), and an alert is generated saying the backup failed:

RMAN-03009: failure of backup command on t2 channel at 01/02/2016 06:10:48

ORA-27192: skgfcls: sbtclose2 returned error - failed to close file

ORA-19511: Error received from media manager layer, error text:

   VxBSAEndTxn: Failed with error:


I need individual technotes for each reason why VMware snapshots/backups fail

I need a solution

Hi All,

I need individual technotes for each VMware snapshot/backup failure, with resolutions. If you could help and provide them, it would be very helpful.


Grandfather-Father-Son Tape Rotation

I need a solution

Ever since I started with backups, I have used the GFS tape rotation scheme (more info at https://www.backupassist.com/education/bsg2.html), but that was back when one could say, "this backup goes to this physical tape (or tapes)".

Now, with tape libraries and so on, one cannot control which tapes will be used for particular backups, only which set of tapes is used, via pools.

So I have implemented GFS tape rotation by creating Daily, Weekly, and Monthly pools. So far it works: the backup policies I updated use the correct pool depending on whether the backup is Daily, Weekly, or Monthly.

Now my question is: how do I rotate the tapes? For example, the Daily pool consists of several tapes; how would I know which ones were used on Monday, Tuesday, or Wednesday?

Or should I just leave the Daily pool tapes in the tape library? And what about the Weekly pool tapes? Do I also leave them in the library until they expire and get reused?

Or is my tape rotation scheme outdated and not applicable to today's backup strategies?

EDIT: I will use the "Data Expiration" column to know when to put one of the daily tapes back in. If only NBU had a notification system to tell you when, life would be a piece of cake.
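A command-line way to get at the same information, for what it's worth (a rough sketch, assuming a UNIX master server; on Windows the same commands live under the NetBackup install path):

# list all media with last-written and expiration details
/usr/openv/netbackup/bin/admincmd/bpmedialist -U
# the bundled goodies script groups media by volume pool (Daily/Weekly/Monthly)
/usr/openv/netbackup/bin/goodies/available_media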


(Probably silly) question about inserting blank tapes into the library and them being assigned the wrong hcart type

I need a solution

Hi guys,

I'm totally new to the backup-to-tape world.

I am following an SOP that my predecessor left for the backup system. I loaded new tapes into the library, but my backups are not starting (status 96) because no media is available. After loading the new tapes I ran an inventory and then assigned the correct volume pool.

But I can see that the previous backup jobs used hcart tapes, while the ones I loaded are labelled as hcart2 when I go to the robots section and check the list of tapes in the robot.

Could you help? What am I doing wrong?

Thanks!
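(Side note, purely as an illustration: if the media type itself needs to be corrected after the inventory, vmchange is the usual tool. This is only a sketch, assuming the -new_mt option is available in this release, with A00001 as a placeholder media ID.)

# change the media type of an unassigned volume from hcart2 to hcart
vmchange -new_mt hcart -m A00001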


You do not have permission to read the contents of the log folder.


How to expire tapes in the catalog?

I need a solution

NBU 7.6.0.3

In Catalog > Search, if I list all my tapes for the year 2014, it doesn't show any "Data Expiration" column, just a "Date" column.

How does one first check the expiration date of a tape in the catalog before actually expiring it manually?
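A command-line sketch of the same check (A00001 is a placeholder media ID; on a Windows master the commands are under the NetBackup install path):

# show details for one media ID, including its data expiration date
bpmedialist -m A00001 -U
# once confirmed, expire the media manually
bpexpdate -m A00001 -d 0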


How to use SSR 2013 R2 SP3 with PowerShell?

I need a solution

Hello,

I want to use Symantec System Recovery 2013 R2 SP3 with PowerShell.

I use the following Management Console: SSR 2013 R2 SP3 MS.

I saw the scripts in ...\SSR 2013 R2 SP3\Sym_System_Recovery_2013R2_11.1.3.55088_Multilingual_Product\Docs\Automation\PowershellScripts, but how can I run them successfully?

I want to use the scripts to run the backups that failed.

Thanks for your response.

Samir


Configure media server with two IP addresses?

I need a solution

Hi,

I'm configuring a Windows 2008 R2 media server running NB 7.5.0.6 to write to tape from Data Domain storage. It has two network connections to different switches; as each switch can only offer a maximum throughput of 1 Gb, I want to utilise both to maximise throughput.

How do I set this up? Do I create a second hostname and then create two connections to Data Domain using nbdevconfig and tpconfig? For example, if my server name is server1 and the Data Domain hostname is dd01, my plan so far is:

Configure hostname server1_a with ip 10.0.0.1

Configure hostname server1_b with ip 10.0.0.2

Create nbdevconfig entries on the media server for both hostnames:
nbdevconfig -creatests -stype DataDomain -storage_server dd01 -media_server server1_a
nbdevconfig -creatests -stype DataDomain -storage_server dd01 -media_server server1_b

Then, after adding the tape library to NetBackup using Configure Storage Devices, create two storage units for it, one with server1_a as the media server and one with server1_b, e.g. SL150_Library_Server1_A and SL150_Library_Server1_B.

Finally, configure half of my SLPs for tape writes to use SL150_Library_Server1_A, and the other half to use SL150_Library_Server1_B.
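Once both logical media servers are configured, I was planning a quick sanity check that everything shows up, something like the sketch below (assuming the standard admincmd location):

# list the configured Data Domain storage server(s)
nbdevquery -liststs -stype DataDomain -U
# list the storage units and the media server each one uses
bpstulist -U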

As far as I can see this would work; the only clumsy aspect is that there is no automatic load balancing, so we would have to manually assign the tape library storage units for each SLP so that both are used equally.

Or is there a better way?

Thanks!


NetBackup Consolidate Retry Attempt

I need a solution

Hello All,

As described at the link below, NetBackup tries to remove/consolidate snapshots 5 times before giving up.

https://www.veritas.com/community/blogs/nuts-and-b...

I'm having a problem with this parameter, and I would like to know where I can find and change it.

Do you have any idea where I can find it?

NetBackup version 7.6 (Red Hat Linux)


Back Up Mount Points with NetBackup

I need a solution

NetBackup and Mount points

We have a NetBackup 7.6.0.4 environment and need to set up a new policy for a Windows 2012 R2 cluster.

We have created 4 mount points on the D volume:
\\server\D$\StoreEasyData1
\\server\D$\StoreEasyData2
\\server\D$\StoreEasyData3
\\server\D$\StoreEasyData4

Each volume will grow to a maximum of 2TB

Do I just need to create an MS-Windows type policy with a selection list containing the mount points, e.g. \\server\D$\StoreEasyData1?
Can I add all 4 mount points to the same policy?
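If it helps, the same selections can also be added from the master server command line; a minimal sketch (the policy name StoreEasy-Policy is made up for this example, and the MS-Windows policy itself would already exist):

bpplinclude StoreEasy-Policy -add \\server\D$\StoreEasyData1
bpplinclude StoreEasy-Policy -add \\server\D$\StoreEasyData2
bpplinclude StoreEasy-Policy -add \\server\D$\StoreEasyData3
bpplinclude StoreEasy-Policy -add \\server\D$\StoreEasyData4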
