Class RightAws::S3Interface

In:     lib/s3/right_s3_interface.rb
Parent: RightAwsBase
Constants:

  USE_100_CONTINUE_PUT_SIZE = 1_000_000
  DEFAULT_HOST              = 's3.amazonaws.com'
  DEFAULT_PORT              = 443
  DEFAULT_PROTOCOL          = 'https'
  REQUEST_TTL               = 30
  DEFAULT_EXPIRES_AFTER     = 1 * 24 * 60 * 60
  ONE_YEAR_IN_SECONDS       = 365 * 24 * 60 * 60
  AMAZON_HEADER_PREFIX      = 'x-amz-'
  AMAZON_METADATA_PREFIX    = 'x-amz-meta-'
Creates new RightS3 instance.
s3 = RightAws::S3Interface.new('1E3GDYEOGFJPIT7XXXXXX','hgTHt68JY07JKUY08ftHYtERkjgtfERn57XXXXXX', {:multi_thread => true, :logger => Logger.new('/tmp/x.log')}) #=> #<RightS3:0xb7b3c27c>
Params is a hash:
  {:server       => 's3.amazonaws.com' # Amazon service host: 's3.amazonaws.com' (default)
   :port         => 443                # Amazon service port: 80 or 443 (default)
   :protocol     => 'https'            # Amazon service protocol: 'http' or 'https' (default)
   :multi_thread => true|false         # Multi-threaded (connection per each thread): true or false (default)
   :logger       => Logger Object}     # Logger instance: logs to STDOUT if omitted
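For instance, a minimal sketch of a non-default configuration (plain HTTP on port 80; the credential strings are placeholders):

  require 'right_aws'
  require 'logger'

  s3 = RightAws::S3Interface.new('YOUR_ACCESS_KEY_ID', 'YOUR_SECRET_ACCESS_KEY',
         {:protocol => 'http', :port => 80, :logger => Logger.new(STDOUT)})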
Retrieve bucket location
  s3.create_bucket('my-awesome-bucket-us')                   #=> true
  puts s3.bucket_location('my-awesome-bucket-us')            #=> '' (Amazon's default value assumed)
  s3.create_bucket('my-awesome-bucket-eu', :location => :eu) #=> true
  puts s3.bucket_location('my-awesome-bucket-eu')            #=> 'EU'
Removes all keys from bucket. Returns true or an exception.
s3.clear_bucket('my_awesome_bucket') #=> true
Copy an object.

directive:
  :copy    - copy meta-headers from the source (default value)
  :replace - replace meta-headers with the passed ones

  # copy a key with meta-headers
  s3.copy('b1', 'key1', 'b1', 'key1_copy') #=> {:e_tag=>"\"e8b...8d\"", :last_modified=>"2008-05-11T10:25:22.000Z"}
  # copy a key, overwrite meta-headers
  s3.copy('b1', 'key2', 'b1', 'key2_copy', :replace, 'x-amz-meta-family'=>'Woho555!') #=> {:e_tag=>"\"e8b...8d\"", :last_modified=>"2008-05-11T10:26:22.000Z"}
see: docs.amazonwebservices.com/AmazonS3/2006-03-01/UsingCopyingObjects.html
http://docs.amazonwebservices.com/AmazonS3/2006-03-01/RESTObjectCOPY.html
Creates new bucket. Returns true or an exception.
  # create a bucket at the American server
  s3.create_bucket('my-awesome-bucket-us') #=> true
  # create a bucket at the European server
  s3.create_bucket('my-awesome-bucket-eu', :location => :eu) #=> true
Deletes key. Returns true or an exception.
  s3.delete('my_awesome_bucket', 'log/current/1.log') #=> true
Deletes a bucket. The bucket must be empty! Returns true or an exception.
s3.delete_bucket('my_awesome_bucket') #=> true
See also: force_delete_bucket method
Deletes all keys where the 'folder_key' may be treated as a 'folder' name. Returns an array of the string keys that have been deleted.

  s3.list_bucket('my_awesome_bucket').map{|key_data| key_data[:key]} #=> ['test','test/2/34','test/3','test1','test1/logs']
  s3.delete_folder('my_awesome_bucket','test')                       #=> ['test','test/2/34','test/3']
Deletes all keys in bucket then deletes bucket. Returns true or an exception.
s3.force_delete_bucket('my_awesome_bucket')
Retrieves object data from Amazon. Returns a hash or an exception.
  s3.get('my_awesome_bucket', 'log/current/1.log') #=>
    {:object  => "Ola-la!",
     :headers => {"last-modified"     => "Wed, 23 May 2007 09:08:04 GMT",
                  "content-type"      => "",
                  "etag"              => "\"000000000096f4ee74bc4596443ef2a4\"",
                  "date"              => "Wed, 23 May 2007 09:08:03 GMT",
                  "x-amz-id-2"        => "ZZZZZZZZZZZZZZZZZZZZ1HJXZoehfrS4QxcxTdNGldR7w/FVqblP50fU8cuIMLiu",
                  "x-amz-meta-family" => "Woho556!",
                  "x-amz-request-id"  => "0000000C246D770C",
                  "server"            => "AmazonS3",
                  "content-length"    => "7"}}
If a block is provided, yields incrementally to the block as the response is read. For large responses this is ideal, as the response can be 'streamed'. The hash containing the header fields is still returned. Example:

  foo = File.new('./chunder.txt', File::CREAT|File::RDWR)
  rhdr = s3.get('aws-test', 'Cent5V1_7_1.img.part.00') do |chunk|
    foo.write(chunk)
  end
  foo.close
Retrieves the ACL (access control policy) for a bucket or object. Returns a hash of headers and an XML doc with the ACL data. See: docs.amazonwebservices.com/AmazonS3/2006-03-01/RESTAccessPolicy.html.

  s3.get_acl('my_awesome_bucket', 'log/current/1.log') #=>
    {:headers => {"x-amz-id-2"        => "B3BdDMDUz+phFF2mGBH04E46ZD4Qb9HF5PoPHqDRWBv+NVGeA3TOQ3BkVvPBjgxX",
                  "content-type"      => "application/xml;charset=ISO-8859-1",
                  "date"              => "Wed, 23 May 2007 09:40:16 GMT",
                  "x-amz-request-id"  => "B183FA7AB5FBB4DD",
                  "server"            => "AmazonS3",
                  "transfer-encoding" => "chunked"},
     :object  => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<AccessControlPolicy xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Owner>
                  <ID>16144ab2929314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a</ID><DisplayName>root</DisplayName></Owner>
                  <AccessControlList><Grant><Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"CanonicalUser\"><ID>
                  16144ab2929314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a</ID><DisplayName>root</DisplayName></Grantee>
                  <Permission>FULL_CONTROL</Permission></Grant></AccessControlList></AccessControlPolicy>"}
Retrieves the ACL (access control policy) for a bucket or object. Returns a hash of {:owner, :grantees}.

  s3.get_acl_parse('my_awesome_bucket', 'log/current/1.log') #=>
    {:grantees =>
       {"16...2a" =>
          {:display_name => "root",
           :permissions  => ["FULL_CONTROL"],
           :attributes   => {"xsi:type"  => "CanonicalUser",
                             "xmlns:xsi" => "http://www.w3.org/2001/XMLSchema-instance"}},
        "http://acs.amazonaws.com/groups/global/AllUsers" =>
          {:display_name => "AllUsers",
           :permissions  => ["READ"],
           :attributes   => {"xsi:type"  => "Group",
                             "xmlns:xsi" => "http://www.w3.org/2001/XMLSchema-instance"}}},
     :owner =>
       {:id           => "16..2a",
        :display_name => "root"}}
Retrieves the ACL (access control policy) for a bucket. Returns a hash of headers and an XML doc with the ACL data.
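A usage sketch (the method name get_bucket_acl is assumed here by analogy with get_acl above; the returned hash should have the same shape as get_acl with no key given):

  s3.get_bucket_acl('my_awesome_bucket') #=> {:headers => {...}, :object => "<?xml ...><AccessControlPolicy ...>...</AccessControlPolicy>"}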
Generates a link for 'GetObject'.

If the bucket name complies with the virtual-hosting naming rules, returns a link with the bucket as part of the host name:
s3.get_link('my-awesome-bucket',key) #=> https://my-awesome-bucket.s3.amazonaws.com:443/asia%2Fcustomers?Signature=nh7...
otherwise returns an old-style link (the bucket is part of the path):
s3.get_link('my_awesome_bucket',key) #=> https://s3.amazonaws.com:443/my_awesome_bucket/asia%2Fcustomers?Signature=QAO...
see docs.amazonwebservices.com/AmazonS3/2006-03-01/VirtualHosting.html
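A sketch of a time-limited link (this assumes an optional expiration argument given in seconds; the DEFAULT_EXPIRES_AFTER constant above suggests one day is used when it is omitted):

  link = s3.get_link('my-awesome-bucket', key, 2 * 60 * 60) # hypothetical: link valid for two hours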
Retrieves object data only (headers are omitted). Returns string or an exception.
  s3.get_object('my_awesome_bucket', 'log/current/1.log') #=> 'Ola-la!'
Retrieves object metadata. Returns a hash of http_response_headers.
  s3.head('my_awesome_bucket', 'log/current/1.log') #=>
    {"last-modified"     => "Wed, 23 May 2007 09:08:04 GMT",
     "content-type"      => "",
     "etag"              => "\"000000000096f4ee74bc4596443ef2a4\"",
     "date"              => "Wed, 23 May 2007 09:08:03 GMT",
     "x-amz-id-2"        => "ZZZZZZZZZZZZZZZZZZZZ1HJXZoehfrS4QxcxTdNGldR7w/FVqblP50fU8cuIMLiu",
     "x-amz-meta-family" => "Woho556!",
     "x-amz-request-id"  => "0000000C246D770C",
     "server"            => "AmazonS3",
     "content-length"    => "7"}
Incrementally list the contents of a bucket. Yields the following hash to a block:
  s3.incrementally_list_bucket('my_awesome_bucket', { 'prefix'=>'t', 'marker'=>'', 'max-keys'=>5, 'delimiter'=>'' }) yields
    {:name            => 'bucketname',
     :prefix          => 'subfolder/',
     :marker          => 'fileN.jpg',
     :max_keys        => 234,
     :delimiter       => '/',
     :is_truncated    => true,
     :next_marker     => 'fileX.jpg',
     :contents        =>
       [{:key                => "file1",
         :last_modified      => "2007-05-18T07:00:59.000Z",
         :e_tag              => "000000000059075b964b07152d234b70",
         :size               => 3,
         :storage_class      => "STANDARD",
         :owner_id           => "00000000009314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a",
         :owner_display_name => "root"},
        {:key => ...}, ..., {:key => ...}],
     :common_prefixes => ["prefix1", "prefix2", ..., "prefixN"]}
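A minimal sketch of the block form, collecting every key page by page (:contents and :key are taken from the yielded hash above; the 'logs/' prefix is illustrative):

  all_keys = []
  s3.incrementally_list_bucket('my_awesome_bucket', 'prefix' => 'logs/') do |page|
    all_keys.concat(page[:contents].map { |item| item[:key] })
  end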
Returns an array of the customer's buckets. Each item is a hash.

  s3.list_all_my_buckets #=>
    [{:owner_id           => "00000000009314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a",
      :owner_display_name => "root",
      :name               => "bucket_name",
      :creation_date      => "2007-04-19T18:47:43.000Z"}, ..., {...}]
Returns an array of the bucket's keys. Each array item (key data) is a hash.

  s3.list_bucket('my_awesome_bucket', { 'prefix'=>'t', 'marker'=>'', 'max-keys'=>5, 'delimiter'=>'' }) #=>
    [{:key                => "test1",
      :last_modified      => "2007-05-18T07:00:59.000Z",
      :owner_id           => "00000000009314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a",
      :owner_display_name => "root",
      :e_tag              => "000000000059075b964b07152d234b70",
      :storage_class      => "STANDARD",
      :size               => 3,
      :service            => {'is_truncated' => false,
                              'prefix'       => "t",
                              'marker'       => "",
                              'name'         => "my_awesome_bucket",
                              'max-keys'     => "5"}}, ..., {...}]
Move an object.

directive:
  :copy    - copy meta-headers from the source (default value)
  :replace - replace meta-headers with the passed ones

  # move bucket1/key1 to bucket1/key2
  s3.move('bucket1', 'key1', 'bucket1', 'key2') #=> {:e_tag=>"\"e8b...8d\"", :last_modified=>"2008-05-11T10:27:22.000Z"}
  # move bucket1/key1 to bucket2/key2 with new meta-headers assignment
  s3.move('bucket1', 'key1', 'bucket2', 'key2', :replace, 'x-amz-meta-family'=>'Woho555!') #=> {:e_tag=>"\"e8b...8d\"", :last_modified=>"2008-05-11T10:28:22.000Z"}
Saves object to Amazon. Returns true or an exception. Any header starting with AMAZON_METADATA_PREFIX is considered user metadata. It will be stored with the object and returned when you retrieve the object. The total size of the HTTP request, not including the body, must be less than 4 KB.
s3.put('my_awesome_bucket', 'log/current/1.log', 'Ola-la!', 'x-amz-meta-family'=>'Woho556!') #=> true
This method is capable of 'streaming' uploads; that is, it can upload data from a file or other IO object without first reading all the data into memory. This is most useful for large PUTs - it is difficult to read a 2 GB file entirely into memory before sending it to S3. To stream an upload, pass an object that responds to 'read' (like the read method of IO) and to either 'lstat' or 'size'. For files, this means streaming is enabled by simply making the call:
s3.put(bucket_name, 'S3keyname.forthisfile', File.open('localfilename.dat'))
If the IO object you wish to stream from responds to the read method but doesn't implement lstat or size, you can extend the object dynamically to implement these methods, or define your own class which defines these methods. Be sure that your class returns 'nil' from read() after having read 'size' bytes. Otherwise S3 will drop the socket after 'Content-Length' bytes have been uploaded, and HttpConnection will interpret this as an error.
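A minimal sketch of such a wrapper class (the class name is illustrative, not part of the library):

  # Wraps an in-memory string so it can be streamed to put():
  # it responds to 'size' and returns nil from 'read' once 'size' bytes have been read.
  class StringStream
    def initialize(data)
      @data = data
      @pos  = 0
    end

    def size
      @data.size
    end

    def read(bytes = nil)
      return nil if @pos >= @data.size # nil after 'size' bytes, as required above
      chunk = @data[@pos, bytes || @data.size - @pos]
      @pos += chunk.size
      chunk
    end
  end

  s3.put('my_awesome_bucket', 'streamed.dat', StringStream.new('Ola-la!' * 1000))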
This method now supports very large PUTs, where very large is > 2 GB.
For Win32 users: Files and IO objects should be opened in binary mode. If a text mode IO object is passed to PUT, it will be converted to binary mode.
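For example (a sketch; 'rb' opens the file in binary read mode):

  s3.put(bucket_name, 'S3keyname.forthisfile', File.open('localfilename.dat', 'rb'))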