This article describes how to set up a Vidispine API as-a-Service instance to log to an S3 bucket through the Vidinet Dashboard, including the permissions a customer needs to set on their S3 bucket to allow the Vidispine instance to write its logs there. It also shows how to use Logstash to make use of the logs.

Set S3 Policy

First things first. Before setting up the log destination in Vidinet, you need to change your bucket policy to allow Vidispine to write to your bucket. Open the AWS console, navigate to Amazon S3, and select the bucket you want to use. The policy is set under Permissions -> Bucket Policy (see picture).



In the bucket policy editor, add the following policy for your bucket.  


{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "Statement3",
           "Effect": "Allow",
           "Principal": {
               "AWS": "arn:aws:iam::823635665685:role/ontap-lab-prod-lambda-export-logs"
           },
           "Action": "s3:PutObject",
           "Resource": "arn:aws:s3:::{your-s3-bucket}/*"
       }
   ]
}
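
If you prefer the command line to the console, the same policy can be applied with the AWS CLI. A minimal sketch, assuming the policy above has been saved locally as vidispine-log-policy.json (the file name is just an example) and that your credentials are allowed to change bucket policies:

# Apply the policy document to your bucket (replace {your-s3-bucket} with your bucket name)
aws s3api put-bucket-policy \
  --bucket {your-s3-bucket} \
  --policy file://vidispine-log-policy.json

# Verify that the policy is in place
aws s3api get-bucket-policy --bucket {your-s3-bucket}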



Configure logging in Vidinet Dashboard

The logging is configured from the Vidinet Dashboard. Find the panel named "Log Destination" (see below). 



Select Configure and add your bucket name and a folder where you want the logs to be stored. Vidispine will create the folder if it does not exist. The logs will be stored in a folder hierarchy with the format yyyy/MM/dd/<logfiles>.
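
For example, if you pick a folder named vs-logs (the same prefix used in the Logstash example below), the log files for a given day can be listed with the AWS CLI like this (bucket name and date are only illustrative):

aws s3 ls s3://{your-s3-bucket}/vs-logs/2024/01/15/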


Note: Make sure you use different buckets and/or folders for your different Vidispine API instances to avoid mixing up logs from different instances in the same folder.


When you have created the log destination, it will be visible in the panel, allowing you to edit or remove it at any time.


Logstash

Once this bucket is configured from the Dashboard/Cluster Manager, the logs will start coming into the bucket under the selected prefix. There will be a lot of small log files, which you can, for example, pick up and make use of with Logstash. The following config picks up all new files and saves them to a local file, server.log; it also deletes processed files to keep your bucket clean. Note that you can use either temporary credentials with a session token or static credentials:


input {
  s3 {
    # Static credentials for the bucket; see below for temporary credentials
    access_key_id => "AKIA..."
    secret_access_key => "keVM..."
    bucket => "acme-media-solutions"
    # Keep a local copy of each processed file
    backup_to_dir => "/tmp/processed"
    region => "eu-west-1"
    # Only pick up objects under the log folder configured in the Dashboard
    prefix => "vs-logs/"
    # Poll the bucket every 5 seconds
    interval => 5
    # Delete processed files to keep the bucket clean
    delete => true
    additional_settings => {
      "follow_redirects" => false
    }
  }
}

filter {
  grok {
    # Extract the instance name, client IP and timestamp from each log line
    match => { "message" => "\[%{GREEDYDATA:instance}\] %{IP:client} - - \[%{GREEDYDATA:timestamp}\] %{GREEDYDATA:text}" }
  }
  mutate {
    # Remove leading and trailing whitespace from the message
    strip => "message"
  }
}

output {
  # Print events to the console and append the raw message to server.log
  stdout { }
  file {
    path => "server.log"
    codec => line { format => "%{message}" }
  }
}
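
If you are running with temporary credentials, for example from an assumed role, the S3 input plugin also accepts a session token. A minimal sketch of the input section in that case, assuming the same bucket and prefix as above (the credential values are placeholders):

input {
  s3 {
    # Temporary credentials: access key, secret key and session token, e.g. from STS
    access_key_id => "ASIA..."
    secret_access_key => "keVM..."
    session_token => "FQoG..."
    bucket => "acme-media-solutions"
    region => "eu-west-1"
    prefix => "vs-logs/"
  }
}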


Start Logstash and you can investigate your logs from there:


$ logstash -f config.json
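
Once it is running, the combined log in server.log can be inspected with your usual command line tools, for example (the search pattern is just an example):

$ tail -f server.log
$ grep "VX-1234" server.log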