I'm using the S3FS library during a Migrate operation; we're copying a file from a remote server to S3.
The relevant call is:
try {
  $file = file_save_data($request->data, $destination, $flag);
}
catch (Exception $e) {
  // Handle the exception.
}
Where the following are set:
$request->data is the raw contents of the file I want to save.
$destination is a stream wrapper URI like s3://filename.pdf.
$flag is FILE_EXISTS_RENAME.
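For concreteness, the setup looks roughly like this (a sketch; the source URL, the destination filename, and fetching via drupal_http_request() are placeholders for my actual migration code):

// Illustrative values only; my migration supplies the real ones.
$request = drupal_http_request('https://example.com/files/filename.pdf');
// drupal_http_request() returns an object whose ->data holds the body.
$destination = 's3://filename.pdf'; // s3fs stream wrapper URI
$flag = FILE_EXISTS_RENAME;         // core flag: rename if the target exists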
An entry is created in the s3fs_file table for my file, and the file itself appears on S3 moments later, but during the execution of file_save_data() an exception is thrown with the following message:
Error executing "GetObject" on "https://bucketname.s3.amazonaws.com/s3fs-public/filename.pdf"; AWS HTTP error:
array_shift() expects parameter 1 to be array, null given
File /sites/all/vendor/guzzlehttp/guzzle/src/Handler/StreamHandler.php, line 101
File /docroot/sites/all/vendor/aws/aws-sdk-php/src/S3/StreamWrapper.php, line 738
Later in the same script, I can call $s3Client->headObject() on the same stream wrapper URI and it works.
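For reference, the later check that succeeds looks roughly like this (a sketch; how I obtain the client, the region, and the bucket/key values are specific to my setup, so treat them as assumptions):

use Aws\S3\S3Client;

// Assumption: SDK v3 client, credentials resolved from the environment.
$s3Client = new S3Client(array(
  'version' => 'latest',
  'region'  => 'us-east-1', // placeholder for our bucket's region
));

$result = $s3Client->headObject(array(
  'Bucket' => 'bucketname',
  'Key'    => 's3fs-public/filename.pdf',
));
// This returns the object's metadata without throwing, a few
// milliseconds after file_save_data() failed with the error above.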
Right now I am coding around this problem, and I'd rather not have to: I can pull the URI out of the s3fs_file table and manually create a file object and file ID to attach to my node.
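The workaround looks roughly like this (a sketch; the node field name and the uid are placeholders, and it assumes the s3fs_file row exists by the time this runs):

// Confirm the URI landed in s3fs_file despite the exception.
$uri = db_query(
  'SELECT uri FROM {s3fs_file} WHERE uri = :uri',
  array(':uri' => 's3://filename.pdf')
)->fetchField();

if ($uri) {
  // Build a managed file record by hand, since file_save_data()
  // threw before returning a file object.
  $file = new stdClass();
  $file->uri = $uri;
  $file->filename = drupal_basename($uri);
  $file->filemime = file_get_mimetype($uri);
  $file->uid = 1; // placeholder uid
  $file->status = FILE_STATUS_PERMANENT;
  file_save($file);

  // Attach the new fid to the node (field name is a placeholder).
  $node->field_document[LANGUAGE_NONE][0]['fid'] = $file->fid;
}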
Is there some operation that doesn't happen during file_save_data()? I have someone checking the Access Control List on the bucket, and they say everything is fine: everyone has access to GetObject, and I can run HeadObject successfully a few milliseconds later.
I have updated to the latest 3.x version of the AWS SDK library and that didn't change anything, and I am already on the latest 7.x-3.x-dev branch of S3FS.