Sorting on the suite and member screens is not working in the current version (4.4.1 - R04M041191016); it will be fixed in the next release. The filters do work, though, so use the filters to find what you need in long lists.
In order to run a job after its prerequisites have finished, and also run the job at 9:30 regardless of those prerequisites, you would add this new job (JOB_Scheduled) as a prerequisite with an 'OR'. If you were just to add a time, as noted, the job would not run at all if the current prerequisite jobs did not complete before that time.
Case #1. What happens if a member is already running when the backup kicks off? Is that OK, or can you not have that either? In that case, is the non-run window more like 4am-7am, or even 3am-7am to be safe? My thinking here is that you can use a schedule queue: just hold the queue the member jobs run in during the backup. You only need to figure out the time to hold the queue.
Case #2. You should be able to use a date list here: place that date list into the suite's special instances, then use the skip special instance option on the member job. Use the built-in help to get an understanding of all three of those settings, and we can answer any further questions from there.
The general setup for this should be fine. The key question is how and when that variable gets set. Is it only for a certain user, or is it set up as a global variable? If running $curr_date-14 gives you the correct date in the shell, then using it in Skybot\Automate Schedule should also work fine. The first thing to determine is whether the job can 'see' the variable, so set up a job that does a simple 'echo $curr_date' and see if you get the results you were expecting. You may have to call whatever script creates this variable as the first command in the job, though.
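As a sketch of that first check (the script path and the way `curr_date` is set here are hypothetical stand-ins; your environment sets it however it sets it), a test job's commands could look like:

```shell
# First command: source whatever script sets the variable, e.g. (hypothetical path):
# . /etc/profile.d/set_curr_date.sh
# Stand-in for that script, for illustration only:
curr_date=$(date +%Y%m%d)

# Second command: confirm the job can actually 'see' the variable.
echo "curr_date is: $curr_date"

# If the -14 really means "14 days back", GNU date can also compute
# that directly, which avoids depending on the variable at all:
date -d "14 days ago" +%Y%m%d
```

If the echo comes back empty, the variable is not visible to the job's shell, and calling the setup script as the first command is the fix to try.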
Sorry you are having an issue. As noted, when you go to any screen, the first record is auto-selected. However, since you have to right-click on a job to either run it or hold it, the users would have to actually be placing the mouse on that first job to get those results; there is no keyboard shortcut that could be doing this. Perhaps they assume the highlighted job is the one they need?
Unfortunately, there is no way to remove that auto-selection of the top job at this time.
There are three ways you can handle this; each of them just uses an OS-specific command for the delay.
The first option is to add a 30-minute delay to the first job as its second command. You can use something generic like
ping -n 1800 127.0.0.1 > NUL (on Windows, the number after -n is how many pings to send, and each one takes about a second), or on Linux you can use the sleep 1800 command.
With either option, the main command runs and completes, then the delay command runs and completes 30 minutes later. The one issue is that the second job will kick off as soon as the first job completes, which is 30 minutes after the first command of the first job actually finished.
The second option is basically like the first, but you place the delay on the second job as its first command. Again, the job will run right after the first one has completed, but it then 'waits' for 30 minutes before actually running the command you want.
The third option is to add a job between the two jobs to do the wait. This will make the history of job 1 and job 2 show the 30 minutes between them, but of course you are now adding another job to the mix: job 1 completes, the new wait job is reactive off of it, and job 2 is reactive off of this new delay job.
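The first option can be sketched as a pair of job commands (using a 2-second delay for illustration; the real value would be 1800 for 30 minutes, and `run_main_work` is just a hypothetical stand-in for the job's real command):

```shell
# Sketch of option 1: the job's real work, then an OS delay command.
run_main_work() { echo "main work done"; }   # stand-in for the real command

run_main_work    # first command: the real work
sleep 2          # second command: the delay (1800 for 30 minutes;
                 # on Windows: ping -n 1800 127.0.0.1 > NUL)
echo "job complete; the reactive job kicks off now"
```

Options 2 and 3 use the same delay command, just placed at the start of the downstream job or in a dedicated in-between job.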
Let us know if you have any questions.
Hello Jill, things might have changed on Server 2016 (this was probably last set up on 2012, and 2008 R2 for sure), but the key here is the shared storage: the storage should only be available to a single node at a time. Say you have systems called sys1 and sys2 set up in a cluster with the shared storage set as the z:\ drive for both. You would have the storage pointing to sys1, then install the agent into z:\automate. Once that is installed and talking to the Automate host, you would stop the agent and change the services to start manually. Then, via the MS cluster configuration, you would move the storage over to sys2 and run the same process (installing to z:\automate). Once you see it connect (you should see a sys1 and a sys2 agent, with sys1 being inactive), you can again stop the services and change them to start manually.
Now, in the cluster manager, you would create a new resource (under the cluster group that has the shared storage) and add the agent service name (Automate Schedule Agent Server). For Automate Schedule on a 64-bit system, the registry key will be:
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Help Systems\Automate Schedule
We have since put other ways to do this in place that might be better for you. You can create a 'Preferred Agent Group', or you can set up each cluster system to report the same name to Automate Schedule and use the cluster agent name option on the agent (so whichever node is up and running is the one jobs run on).
Let us know if you have any questions.
Releasing a job does not mean it will run; it just means it is now allowed to run. Simply releasing it from being held will not make it run: it still has to have something in place (a job schedule, a prerequisite) in order to actually kick off (or kick off any other job).
For now, the only thing you could do is create a batch file or shell script that connects to the FTP server and runs the mdelete command. You would call this as the first command in the Automate job, and the second command then does the FTP transfer.
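A minimal sketch of such a shell script, assuming the stock command-line ftp client is available (the host, credentials, and remote directory below are placeholders you would replace). It builds the command file the ftp client would execute and prints it; the actual ftp call is left commented out since it needs a reachable server:

```shell
#!/bin/sh
# Placeholder connection details -- replace with your own.
FTP_HOST=ftp.example.com
FTP_USER=myuser
FTP_PASS=mypass
REMOTE_DIR=/outbound

# Build the command script the ftp client will run.
CMDFILE=$(mktemp)
cat > "$CMDFILE" <<EOF
user $FTP_USER $FTP_PASS
cd $REMOTE_DIR
prompt
mdelete *
quit
EOF

# Uncomment to actually run it (-i no prompting, -n no auto-login, -v verbose):
# ftp -inv "$FTP_HOST" < "$CMDFILE"

cat "$CMDFILE"
```

The `prompt` line turns off per-file confirmation so `mdelete *` removes everything in the directory without stopping to ask.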
I can put in an enhancement request to add a delete option before a PUSH command. If we ever add the option, we will contact you to let you know which version it is in.
There would not be a simple way to accomplish what you want. However, when you update from Skybot to Automate, symlinks should be created, so anything you had hardcoded to point to /opt/skybot would now just link to /opt/automate-schedule. The update will remove the skybot user and create a new automate user; there is no way to stop that process.
Now, it might be possible that you could install Automate Schedule, let it create the automate user, and install into /opt/automate-schedule. You could then manually create the skybot user, rename the directory /opt/automate-schedule to /opt/skybot, change ownership of all files in /opt/skybot to the skybot user, then edit any scripts so their paths point to /opt/skybot (not sure how many there are off-hand), and hopefully it all still functions (it's not something we would test). You would have to repeat this process for every future update, as the install code will still think you are on a Skybot version. The other issue is that we will always assume the paths to be /opt/automate-schedule, with an automate user on the system, for any support queries we might send.
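To illustrate just the rename-and-repoint part, here is a sketch run under a throwaway root so it is safe to execute (on the real system the paths would be /opt/automate-schedule and /opt/skybot, the script contents would differ, and the useradd/chown steps would need root):

```shell
#!/bin/sh
# Sandbox root so this sketch does not touch the real /opt.
ROOT=$(mktemp -d)

# Stand-in for the directory the Automate Schedule installer creates.
mkdir -p "$ROOT/opt/automate-schedule/bin"
echo 'exec /opt/automate-schedule/bin/server' > "$ROOT/opt/automate-schedule/bin/start.sh"

# Rename the install directory back to the old Skybot path...
mv "$ROOT/opt/automate-schedule" "$ROOT/opt/skybot"

# ...and re-point scripts that reference the new path (the real install
# has more of these; exactly which ones would need checking).
sed -i 's|/opt/automate-schedule|/opt/skybot|g' "$ROOT/opt/skybot/bin/start.sh"

# On the real system you would also, as root:
#   useradd skybot
#   chown -R skybot:skybot /opt/skybot

cat "$ROOT/opt/skybot/bin/start.sh"
```

Again, this is only an illustration of the mechanics; it is not a supported or tested procedure, and it would have to be redone after every update.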