A dataset represents an SQL query, or more generally, an abstract set of rows in the database. Datasets can be used to create, retrieve, update and delete records.
Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):
my_posts = DB[:posts].filter(:author => 'david') # no records are retrieved
my_posts.all # records are retrieved
my_posts.all # records are retrieved again
Most dataset methods return modified copies of the dataset (functional style), so you can reuse different datasets to access data:
posts = DB[:posts]
davids_posts = posts.filter(:author => 'david')
old_posts = posts.filter('stamp < ?', Date.today - 7)
davids_old_posts = davids_posts.filter('stamp < ?', Date.today - 7)
Datasets are Enumerable objects, so they can be manipulated using any of the Enumerable methods, such as map, inject, etc.
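For illustration (assuming a hypothetical items table with name and price columns), the usual Enumerable methods operate directly on the retrieved rows:

names = DB[:items].map{|r| r[:name]}                    # collect a field from every row
total = DB[:items].inject(0){|sum, r| sum + r[:price]}  # fold over the rows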
Some methods (such as the mutation methods listed in MUTATION_METHODS below) are added via metaprogramming rather than defined directly:
COLUMN_CHANGE_OPTS = [:select, :sql, :from, :join].freeze
  The dataset options that require the removal of cached columns if changed.

MUTATION_METHODS = %w'add_graph_aliases and distinct except exclude filter from from_self full_outer_join graph group group_and_count group_by having inner_join intersect invert join join_table left_outer_join limit naked or order order_by order_more paginate qualify query reverse reverse_order right_outer_join select select_all select_more server set_defaults set_graph_aliases set_overrides unfiltered ungraphed ungrouped union unlimited unordered where with with_recursive with_sql'.collect{|x| x.to_sym}
  All methods that should have a ! method added that modifies the receiver.

NOTIMPL_MSG = "This method must be overridden in Sequel adapters".freeze
WITH_SUPPORTED = 'with'.freeze
COMMA_SEPARATOR = ', '.freeze
COUNT_OF_ALL_AS_COUNT = SQL::Function.new(:count, LiteralString.new('*'.freeze)).as(:count)
ARRAY_ACCESS_ERROR_MSG = 'You cannot call Dataset#[] with an integer or with no arguments.'.freeze
MAP_ERROR_MSG = 'Using Dataset#map with an argument and a block is not allowed'.freeze
GET_ERROR_MSG = 'must provide argument or block to Dataset#get, not both'.freeze
IMPORT_ERROR_MSG = 'Using Sequel::Dataset#import an empty column array is not allowed'.freeze
PREPARED_ARG_PLACEHOLDER = LiteralString.new('?').freeze
AND_SEPARATOR = " AND ".freeze
BOOL_FALSE = "'f'".freeze
BOOL_TRUE = "'t'".freeze
COLUMN_REF_RE1 = /\A([\w ]+)__([\w ]+)___([\w ]+)\z/.freeze
COLUMN_REF_RE2 = /\A([\w ]+)___([\w ]+)\z/.freeze
COLUMN_REF_RE3 = /\A([\w ]+)__([\w ]+)\z/.freeze
COUNT_FROM_SELF_OPTS = [:distinct, :group, :sql, :limit, :compounds]
DATASET_ALIAS_BASE_NAME = 't'.freeze
INSERT_SQL_BASE = "INSERT INTO ".freeze
IS_LITERALS = {nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze
IS_OPERATORS = ::Sequel::SQL::ComplexExpression::IS_OPERATORS
N_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS
NULL = "NULL".freeze
QUALIFY_KEYS = [:select, :where, :having, :order, :group]
QUESTION_MARK = '?'.freeze
STOCK_COUNT_OPTS = {:select => [SQL::AliasedExpression.new(LiteralString.new("COUNT(*)").freeze, :count)], :order => nil}.freeze
SELECT_CLAUSE_ORDER = %w'with distinct columns from join where group having compounds order limit'.freeze
TIMESTAMP_FORMAT = "'%Y-%m-%d %H:%M:%S%N%z'".freeze
STANDARD_TIMESTAMP_FORMAT = "TIMESTAMP #{TIMESTAMP_FORMAT}".freeze
TWO_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS
WILDCARD = '*'.freeze
SQL_WITH = "WITH ".freeze
inner_join -> join (join is an alias for inner_join)
convert_types [RW]
  Whether to convert some Java types to Ruby types when retrieving rows. Uses the database's setting by default; can be set to false to roughly double performance when fetching rows.

db [RW]
  The database that corresponds to this dataset.

identifier_input_method [RW]
  The method to call on identifiers going into the database for this dataset.

identifier_output_method [RW]
  The method to call on identifiers coming out of the database for this dataset.

opts [RW]
  The hash of options for this dataset; keys are symbols.

quote_identifiers [W]
  Whether to quote identifiers for this dataset.

row_proc [RW]
  The row_proc for this dataset; should be a Proc that takes a single hash argument and returns the object you want each to yield.
Setup mutation (e.g. filter!) methods. These operate the same as the non-! methods, but replace the options of the current dataset with the options of the resulting dataset.
# File lib/sequel/dataset.rb, line 98
def self.def_mutation_method(*meths)
  meths.each do |meth|
    class_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end")
  end
end
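For example, the generated filter! behaves like filter but replaces the receiver's options in place. A quick sketch, assuming a hypothetical posts table:

ds = DB[:posts]
ds.filter!(:author => 'david')  # ds itself is now filtered
ds.sql #=> "SELECT * FROM posts WHERE (author = 'david')"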
Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:
DB[:posts]
Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter should provide a subclass of Sequel::Dataset, and have the Database#dataset method return an instance of that class.

# File lib/sequel/dataset.rb, line 84
def initialize(db, opts = nil)
  @db = db
  @quote_identifiers = db.quote_identifiers? if db.respond_to?(:quote_identifiers?)
  @identifier_input_method = db.identifier_input_method if db.respond_to?(:identifier_input_method)
  @identifier_output_method = db.identifier_output_method if db.respond_to?(:identifier_output_method)
  @opts = opts || {}
  @row_proc = nil
end
Returns the first record matching the conditions. Examples:
ds[:id=>1] => {:id=>1}

# File lib/sequel/dataset/convenience.rb, line 13
def [](*conditions)
  raise(Error, ARRAY_ACCESS_ERROR_MSG) if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0
  first(*conditions)
end
Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases, which replaces the list. See set_graph_aliases.
# File lib/sequel/dataset/graph.rb, line 6
def add_graph_aliases(graph_aliases)
  ds = select_more(*graph_alias_columns(graph_aliases))
  ds.opts[:graph_aliases] = (ds.opts[:graph_aliases] || ds.opts[:graph][:column_aliases] || {}).merge(graph_aliases)
  ds
end
Adds a further filter to an existing filter using AND. If no filter exists, an error is raised. This method is identical to filter except that it expects an existing filter.
ds.filter(:a).and(:b) # SQL: WHERE a AND b
# File lib/sequel/dataset/sql.rb, line 31
def and(*cond, &block)
  raise(InvalidOperation, "No existing filter found.") unless @opts[:having] || @opts[:where]
  filter(*cond, &block)
end
Returns the average value for the given column.
# File lib/sequel/dataset/convenience.rb, line 27
def avg(column)
  get{|o| o.avg(column)}
end
For the given type (:select, :insert, :update, or :delete), run the SQL with the bind variables specified in the hash. values is a hash passed to insert or update (if one of those types is used), and may contain placeholders.

# File lib/sequel/dataset/prepared_statements.rb, line 176
def call(type, bind_variables={}, values=nil)
  prepare(type, nil, values).call(bind_variables)
end
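A minimal sketch of bind variable usage, assuming a hypothetical items table; the :$i placeholder is filled in from the hash:

ds = DB[:items].filter(:id => :$i)
ds.call(:select, :i => 1)  # runs the SELECT with id = 1
ds.call(:delete, :i => 2)  # runs the DELETE with id = 2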
SQL fragment for specifying given CaseExpression.
# File lib/sequel/dataset/sql.rb, line 47
def case_expression_sql(ce)
  sql = '(CASE '
  sql << "#{literal(ce.expression)} " if ce.expression
  ce.conditions.collect{ |c,r|
    sql << "WHEN #{literal(c)} THEN #{literal(r)} "
  }
  sql << "ELSE #{literal(ce.default)} END)"
end
Returns a new clone of the dataset with the given options merged. If the changed options include options in COLUMN_CHANGE_OPTS, the cached columns are deleted.

# File lib/sequel/dataset.rb, line 132
def clone(opts = {})
  c = super()
  c.opts = @opts.merge(opts)
  c.instance_variable_set(:@columns, nil) if opts.keys.any?{|o| COLUMN_CHANGE_OPTS.include?(o)}
  c
end
Returns the columns in the result set in order. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to get a single row. Adapters are expected to fill the columns cache with the column information when a query is performed. If the dataset does not have any rows, this may be an empty array depending on how the adapter is programmed.
If you are looking for all columns for a single table and maybe some information about each column (e.g. type), see Database#schema.
# File lib/sequel/dataset.rb, line 148
def columns
  return @columns if @columns
  ds = unfiltered.unordered.clone(:distinct => nil, :limit => 1)
  ds.each{break}
  @columns = ds.instance_variable_get(:@columns)
  @columns || []
end
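For illustration, assuming an items table with id and name columns:

DB[:items].columns #=> [:id, :name]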
SQL fragment for complex expressions
# File lib/sequel/dataset/sql.rb, line 67
def complex_expression_sql(op, args)
  case op
  when *IS_OPERATORS
    r = args.at(1)
    if r.nil? || supports_is_true?
      raise(InvalidOperation, 'Invalid argument used for IS operator') unless v = IS_LITERALS[r]
      "(#{literal(args.at(0))} #{op} #{v})"
    elsif op == :IS
      complex_expression_sql(:"=", args)
    else
      complex_expression_sql(:OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args.at(0), nil)])
    end
  when *TWO_ARITY_OPERATORS
    "(#{literal(args.at(0))} #{op} #{literal(args.at(1))})"
  when *N_ARITY_OPERATORS
    "(#{args.collect{|a| literal(a)}.join(" #{op} ")})"
  when :NOT
    "NOT #{literal(args.at(0))}"
  when :NOOP
    literal(args.at(0))
  when :'B~'
    "~#{literal(args.at(0))}"
  else
    raise(InvalidOperation, "invalid operator #{op}")
  end
end
Returns the number of records in the dataset.
# File lib/sequel/dataset/sql.rb, line 100
def count
  options_overlap(COUNT_FROM_SELF_OPTS) ? from_self.count : clone(STOCK_COUNT_OPTS).single_value.to_i
end
Add a mutation method to this dataset instance.
# File lib/sequel/dataset.rb, line 164
def def_mutation_method(*meths)
  meths.each do |meth|
    instance_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end")
  end
end
Deletes the records in the dataset. The returned value is generally the number of records deleted, but that is adapter dependent.
# File lib/sequel/dataset.rb, line 172
def delete
  execute_dui(delete_sql)
end
Formats a DELETE statement using the given options and dataset options.
dataset.filter{|o| o.price >= 100}.delete_sql #=> "DELETE FROM items WHERE (price >= 100)"
# File lib/sequel/dataset/sql.rb, line 108
def delete_sql
  opts = @opts

  return static_sql(opts[:sql]) if opts[:sql]

  check_modification_allowed!

  sql = "DELETE FROM #{source_list(opts[:from])}"

  if where = opts[:where]
    sql << " WHERE #{literal(where)}"
  end

  sql
end
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. Raises an error if arguments are given and DISTINCT ON is not supported.
dataset.distinct # SQL: SELECT DISTINCT * FROM items
dataset.order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id

# File lib/sequel/dataset/sql.rb, line 133
def distinct(*args)
  raise(InvalidOperation, "DISTINCT ON not supported") if !args.empty? && !supports_distinct_on?
  clone(:distinct => args)
end
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each.

# File lib/sequel/dataset.rb, line 182
def each(&block)
  if @opts[:graph]
    graph_each(&block)
  else
    if row_proc = @row_proc
      fetch_rows(select_sql){|r| yield row_proc.call(r)}
    else
      fetch_rows(select_sql, &block)
    end
  end
  self
end
Yields a paginated dataset for each page and returns the receiver. Does a count to find the total number of records for this dataset.
# File lib/sequel/extensions/pagination.rb, line 20
def each_page(page_size, &block)
  raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit]
  record_count = count
  total_pages = (record_count / page_size.to_f).ceil
  (1..total_pages).each{|page_no| yield paginate(page_no, page_size, record_count)}
  self
end
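A usage sketch, assuming a hypothetical items table (process is a placeholder for your own row handling):

DB[:items].order(:id).each_page(50) do |page|
  page.each{|row| process(row)}  # each page is itself a paginated dataset
end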
Returns true if no records exist in the dataset, false otherwise
# File lib/sequel/dataset/convenience.rb, line 32
def empty?
  get(1).nil?
end
Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
DB[:items].except(DB[:other_items]).sql #=> "SELECT * FROM items EXCEPT SELECT * FROM other_items"

# File lib/sequel/dataset/sql.rb, line 148
def except(dataset, opts={})
  opts = {:all=>opts} unless opts.is_a?(Hash)
  raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except?
  raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all?
  compound_clone(:except, dataset, opts)
end
Performs the inverse of Dataset#filter.
dataset.exclude(:category => 'software').sql #=> "SELECT * FROM items WHERE (category != 'software')"
# File lib/sequel/dataset/sql.rb, line 159
def exclude(*cond, &block)
  clause = (@opts[:having] ? :having : :where)
  cond = cond.first if cond.size == 1
  cond = filter_expr(cond, &block)
  cond = SQL::BooleanExpression.invert(cond)
  cond = SQL::BooleanExpression.new(:AND, @opts[clause], cond) if @opts[clause]
  clone(clause => cond)
end
Returns an EXISTS clause for the dataset as a LiteralString.
DB.select(1).where(DB[:items].exists).sql #=> "SELECT 1 WHERE EXISTS (SELECT * FROM items)"
# File lib/sequel/dataset/sql.rb, line 172
def exists
  LiteralString.new("EXISTS (#{select_sql})")
end
Execute the SQL on the database and yield the rows as hashes with symbol keys.
# File lib/sequel/adapters/do.rb, line 191
def fetch_rows(sql)
  execute(sql) do |reader|
    cols = @columns = reader.fields.map{|f| output_identifier(f)}
    while(reader.next!) do
      h = {}
      cols.zip(reader.values).each{|k, v| h[k] = v}
      yield h
    end
  end
  self
end
Returns a copy of the dataset with the given conditions imposed upon it. If the query already has a HAVING clause, then the conditions are imposed in the HAVING clause. If not, then they are imposed in the WHERE clause.
filter accepts the following argument types:
filter also takes a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions.
If both a block and regular argument are provided, they get ANDed together.
Examples:
dataset.filter(:id => 3).sql #=> "SELECT * FROM items WHERE (id = 3)"
dataset.filter('price < ?', 100).sql #=> "SELECT * FROM items WHERE price < 100"
dataset.filter([[:id, [1, 2, 3]], [:id, 0..10]]).sql #=> "SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))"
dataset.filter('price < 100').sql #=> "SELECT * FROM items WHERE price < 100"
dataset.filter(:active).sql #=> "SELECT * FROM items WHERE active"
dataset.filter{|o| o.price < 100}.sql #=> "SELECT * FROM items WHERE (price < 100)"
Multiple filter calls can be chained for scoping:
software = dataset.filter(:category => 'software')
software.filter{|o| o.price < 100}.sql #=> "SELECT * FROM items WHERE ((category = 'software') AND (price < 100))"
See doc/dataset_filters.rdoc for more examples and details.
# File lib/sequel/dataset/sql.rb, line 223
def filter(*cond, &block)
  _filter(@opts[:having] ? :having : :where, *cond, &block)
end
If an integer argument is given, it is interpreted as a limit, and all matching records up to that limit are returned. If no argument is passed, the first matching record is returned. If any other type of argument(s) is passed, it is given to filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything. Examples:

ds.first => {:id=>7}
ds.first(2) => [{:id=>6}, {:id=>4}]
ds.order(:id).first(2) => [{:id=>1}, {:id=>2}]
ds.first(:id=>2) => {:id=>2}
ds.first("id = 3") => {:id=>3}
ds.first("id = ?", 4) => {:id=>4}
ds.first{|o| o.id > 2} => {:id=>5}
ds.order(:id).first{|o| o.id > 2} => {:id=>3}
ds.first{|o| o.id > 2} => {:id=>5}
ds.first("id > ?", 4){|o| o.id < 6} => {:id=>5}
ds.order(:id).first(2){|o| o.id < 2} => [{:id=>1}]

# File lib/sequel/dataset/convenience.rb, line 55
def first(*args, &block)
  ds = block ? filter(&block) : self

  if args.empty?
    ds.single_record
  else
    args = (args.size == 1) ? args.first : args
    if Integer === args
      ds.limit(args).all
    else
      ds.filter(args).single_record
    end
  end
end
The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an error. If the table is aliased, returns the aliased name.

# File lib/sequel/dataset/sql.rb, line 229
def first_source_alias
  source = @opts[:from]
  if source.nil? || source.empty?
    raise Error, 'No source specified for query'
  end
  case s = source.first
  when SQL::AliasedExpression
    s.aliaz
  when Symbol
    sch, table, aliaz = split_symbol(s)
    aliaz ? aliaz.to_sym : s
  else
    s
  end
end
Returns a copy of the dataset with the source changed.
dataset.from # SQL: SELECT *
dataset.from(:blah) # SQL: SELECT * FROM blah
dataset.from(:blah, :foo) # SQL: SELECT * FROM blah, foo

# File lib/sequel/dataset/sql.rb, line 251
def from(*source)
  table_alias_num = 0
  sources = []
  source.each do |s|
    case s
    when Hash
      s.each{|k,v| sources << SQL::AliasedExpression.new(k,v)}
    when Dataset
      sources << SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1))
    when Symbol
      sch, table, aliaz = split_symbol(s)
      if aliaz
        s = sch ? SQL::QualifiedIdentifier.new(sch.to_sym, table.to_sym) : SQL::Identifier.new(table.to_sym)
        sources << SQL::AliasedExpression.new(s, aliaz.to_sym)
      else
        sources << s
      end
    else
      sources << s
    end
  end
  o = {:from=>sources.empty? ? nil : sources}
  o[:num_dataset_sources] = table_alias_num if table_alias_num > 0
  clone(o)
end
Returns a dataset selecting from the current dataset. Supplying the :alias option controls the name of the result.
ds = DB[:items].order(:name).select(:id, :name)
ds.sql #=> "SELECT id, name FROM items ORDER BY name"
ds.from_self.sql #=> "SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS 't1'"
ds.from_self(:alias=>:foo).sql #=> "SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS 'foo'"

# File lib/sequel/dataset/sql.rb, line 284
def from_self(opts={})
  fs = {}
  @opts.keys.each{|k| fs[k] = nil}
  clone(fs).from(opts[:alias] ? as(opts[:alias]) : self)
end
Return the column value for the first matching record in the dataset. Raises an error if both an argument and block is given.
ds.get(:id)
ds.get{|o| o.sum(:id)}

# File lib/sequel/dataset/convenience.rb, line 75
def get(column=nil, &block)
  if column
    raise(Error, GET_ERROR_MSG) if block
    select(column).single_value
  else
    select(&block).single_value
  end
end
Allows you to join multiple datasets/tables and have the result set split into component tables.
This differs from the usual usage of join, which returns the result set as a single hash. For example:
# CREATE TABLE artists (id INTEGER, name TEXT);
# CREATE TABLE albums (id INTEGER, name TEXT, artist_id INTEGER);

DB[:artists].left_outer_join(:albums, :artist_id=>:id).first
=> {:id=>albums.id, :name=>albums.name, :artist_id=>albums.artist_id}

DB[:artists].graph(:albums, :artist_id=>:id).first
=> {:artists=>{:id=>artists.id, :name=>artists.name}, :albums=>{:id=>albums.id, :name=>albums.name, :artist_id=>albums.artist_id}}
Using a join such as left_outer_join, the attribute names that are shared between the tables are combined in the single return hash. You can get around that by using .select with correct aliases for all of the columns, but it is simpler to use graph and have the result set split for you. In addition, graph respects any row_proc of the current dataset and the datasets you use with graph.
If you are graphing a table and all columns for that table are nil, this indicates that no matching rows existed in the table, so graph will return nil instead of a hash with all nil values:
# If the artist doesn't have any albums
DB[:artists].graph(:albums, :artist_id=>:id).first
=> {:artists=>{:id=>artists.id, :name=>artists.name}, :albums=>nil}
Arguments:
# File lib/sequel/dataset/graph.rb, line 56
def graph(dataset, join_conditions = nil, options = {}, &block)
  # Allow the use of a model, dataset, or symbol as the first argument
  # Find the table name/dataset based on the argument
  dataset = dataset.dataset if dataset.respond_to?(:dataset)
  table_alias = options[:table_alias]
  case dataset
  when Symbol
    table = dataset
    dataset = @db[dataset]
    table_alias ||= table
  when ::Sequel::Dataset
    if dataset.simple_select_all?
      table = dataset.opts[:from].first
      table_alias ||= table
    else
      table = dataset
      table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1)
    end
  else
    raise Error, "The dataset argument should be a symbol, dataset, or model"
  end

  # Raise Sequel::Error with explanation that the table alias has been used
  raise_alias_error = lambda do
    raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been been used, please specify " \
      "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}")
  end

  # Only allow table aliases that haven't been used
  raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias)

  # Join the table early in order to avoid cloning the dataset twice
  ds = join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias, :implicit_qualifier=>options[:implicit_qualifier], &block)
  opts = ds.opts

  # Whether to include the table in the result set
  add_table = options[:select] == false ? false : true
  # Whether to add the columns to the list of column aliases
  add_columns = !ds.opts.include?(:graph_aliases)

  # Setup the initial graph data structure if it doesn't exist
  unless graph = opts[:graph]
    master = ds.first_source_alias
    raise_alias_error.call if master == table_alias
    # Master hash storing all .graph related information
    graph = opts[:graph] = {}
    # Associates column aliases back to tables and columns
    column_aliases = graph[:column_aliases] = {}
    # Associates table alias (the master is never aliased)
    table_aliases = graph[:table_aliases] = {master=>self}
    # Keep track of the alias numbers used
    ca_num = graph[:column_alias_num] = Hash.new(0)
    # All columns in the master table are never
    # aliased, but are not included if set_graph_aliases
    # has been used.
    if add_columns
      select = opts[:select] = []
      columns.each do |column|
        column_aliases[column] = [master, column]
        select.push(SQL::QualifiedIdentifier.new(master, column))
      end
    end
  end

  # Add the table alias to the list of aliases
  # Even if it isn't been used in the result set,
  # we add a key for it with a nil value so we can check if it
  # is used more than once
  table_aliases = graph[:table_aliases]
  table_aliases[table_alias] = add_table ? dataset : nil

  # Add the columns to the selection unless we are ignoring them
  if add_table && add_columns
    select = opts[:select]
    column_aliases = graph[:column_aliases]
    ca_num = graph[:column_alias_num]
    # Which columns to add to the result set
    cols = options[:select] || dataset.columns
    # If the column hasn't been used yet, don't alias it.
    # If it has been used, try table_column.
    # If that has been used, try table_column_N
    # using the next value of N that we know hasn't been
    # used
    cols.each do |column|
      col_alias, identifier = if column_aliases[column]
        column_alias = :"#{table_alias}_#{column}"
        if column_aliases[column_alias]
          column_alias_num = ca_num[column_alias]
          column_alias = :"#{column_alias}_#{column_alias_num}"
          ca_num[column_alias] += 1
        end
        [column_alias, SQL::QualifiedIdentifier.new(table_alias, column).as(column_alias)]
      else
        [column, SQL::QualifiedIdentifier.new(table_alias, column)]
      end
      column_aliases[col_alias] = [table_alias, column]
      select.push(identifier)
    end
  end
  ds
end
Pattern match any of the columns to any of the terms. The terms can be strings (which use LIKE) or regular expressions (which are only supported in some databases). See Sequel::SQL::StringExpression.like. Note that the total number of pattern matches will be cols.length * terms.length, which could cause performance issues.
dataset.grep(:a, '%test%') # SQL: SELECT * FROM items WHERE a LIKE '%test%'
dataset.grep([:a, :b], %w'%test% foo') # SQL: SELECT * FROM items WHERE a LIKE '%test%' OR a LIKE 'foo' OR b LIKE '%test%' OR b LIKE 'foo'

# File lib/sequel/dataset/sql.rb, line 304
def grep(cols, terms)
  filter(SQL::BooleanExpression.new(:OR, *Array(cols).collect{|c| SQL::StringExpression.like(c, *terms)}))
end
Returns a copy of the dataset with the results grouped by the value of the given columns.
dataset.group(:id) # SELECT * FROM items GROUP BY id
dataset.group(:id, :name) # SELECT * FROM items GROUP BY id, name

# File lib/sequel/dataset/sql.rb, line 313
def group(*columns)
  clone(:group => (columns.compact.empty? ? nil : columns))
end
Returns a dataset grouped by the given column(s), with a count for each group, ordered by that count. Examples:

ds.group_and_count(:name) => [{:name=>'a', :count=>1}, ...]
ds.group_and_count(:first_name, :last_name) => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]

# File lib/sequel/dataset/convenience.rb, line 89
def group_and_count(*columns)
  group(*columns).select(*(columns + [COUNT_OF_ALL_AS_COUNT])).order(:count)
end
Returns a copy of the dataset with the HAVING conditions changed. Raises an error if the dataset has not been grouped. See filter for argument types.
dataset.group(:sum).having(:sum=>10) # SQL: SELECT * FROM items GROUP BY sum HAVING sum = 10
# File lib/sequel/dataset/sql.rb, line 322
def having(*cond, &block)
  raise(InvalidOperation, "Can only specify a HAVING clause on a grouped dataset") unless @opts[:group]
  _filter(:having, *cond, &block)
end
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table. Inserts are automatically wrapped in a transaction.
This method is called with a columns array and an array of value arrays:
dataset.import([:x, :y], [[1, 2], [3, 4]])
This method also accepts a dataset instead of an array of value arrays:
dataset.import([:x, :y], other_dataset.select(:a___x, :b___y))
The method also accepts a :slice or :commit_every option that specifies the number of records to insert per transaction. This is useful especially when inserting a large number of records, e.g.:
# this will commit every 50 records
dataset.import([:x, :y], [[1, 2], [3, 4], ...], :slice => 50)

# File lib/sequel/dataset/convenience.rb, line 111
def import(columns, values, opts={})
  return @db.transaction{execute_dui("#{insert_sql_base}#{quote_schema_table(@opts[:from].first)} (#{identifier_list(columns)}) #{subselect_sql(values)}")} if values.is_a?(Dataset)

  return if values.empty?
  raise(Error, IMPORT_ERROR_MSG) if columns.empty?

  if slice_size = opts[:commit_every] || opts[:slice]
    offset = 0
    loop do
      @db.transaction(opts){multi_insert_sql(columns, values[offset, slice_size]).each{|st| execute_dui(st)}}
      offset += slice_size
      break if offset >= values.length
    end
  else
    statements = multi_insert_sql(columns, values)
    @db.transaction{statements.each{|st| execute_dui(st)}}
  end
end
Inserts values into the associated table. The returned value is generally the value of the primary key for the inserted row, but that is adapter dependent.
# File lib/sequel/dataset.rb, line 203
def insert(*values)
  execute_insert(insert_sql(*values))
end
Inserts multiple values. If a block is given it is invoked for each item in the given array before inserting it. See multi_insert as a possible faster version that inserts multiple records in one SQL statement.
# File lib/sequel/dataset/sql.rb, line 331
def insert_multiple(array, &block)
  if block
    array.each {|i| insert(block[i])}
  else
    array.each {|i| insert(i)}
  end
end
Formats an INSERT statement using the given values. If a hash is given, the resulting statement includes column names. If no values are given, the resulting statement includes a DEFAULT VALUES clause.
dataset.insert_sql #=> 'INSERT INTO items DEFAULT VALUES'
dataset.insert_sql(1,2,3) #=> 'INSERT INTO items VALUES (1, 2, 3)'
dataset.insert_sql(:a => 1, :b => 2) #=> 'INSERT INTO items (a, b) VALUES (1, 2)'

# File lib/sequel/dataset/sql.rb, line 347
def insert_sql(*values)
  return static_sql(@opts[:sql]) if @opts[:sql]

  check_modification_allowed!

  from = source_list(@opts[:from])
  case values.size
  when 0
    values = {}
  when 1
    vals = values.at(0)
    if [Hash, Dataset, Array].any?{|c| vals.is_a?(c)}
      values = vals
    elsif vals.respond_to?(:values)
      values = vals.values
    end
  end

  case values
  when Array
    if values.empty?
      insert_default_values_sql
    else
      "#{insert_sql_base}#{from} VALUES #{literal(values)}#{insert_sql_suffix}"
    end
  when Hash
    values = @opts[:defaults].merge(values) if @opts[:defaults]
    values = values.merge(@opts[:overrides]) if @opts[:overrides]
    if values.empty?
      insert_default_values_sql
    else
      fl, vl = [], []
      values.each do |k, v|
        fl << literal(String === k ? k.to_sym : k)
        vl << literal(v)
      end
      "#{insert_sql_base}#{from} (#{fl.join(COMMA_SEPARATOR)}) VALUES (#{vl.join(COMMA_SEPARATOR)})#{insert_sql_suffix}"
    end
  when Dataset
    "#{insert_sql_base}#{from} #{literal(values)}#{insert_sql_suffix}"
  end
end
Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
DB[:items].intersect(DB[:other_items]).sql #=> "SELECT * FROM items INTERSECT SELECT * FROM other_items"

# File lib/sequel/dataset/sql.rb, line 400
def intersect(dataset, opts={})
  opts = {:all=>opts} unless opts.is_a?(Hash)
  raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except?
  raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all?
  compound_clone(:intersect, dataset, opts)
end
Inverts the current filter
dataset.filter(:category => 'software').invert.sql #=> "SELECT * FROM items WHERE (category != 'software')"
# File lib/sequel/dataset/sql.rb, line 411
def invert
  having, where = @opts[:having], @opts[:where]
  raise(Error, "No current filter") unless having || where
  o = {}
  o[:having] = SQL::BooleanExpression.invert(having) if having
  o[:where] = SQL::BooleanExpression.invert(where) if where
  clone(o)
end
SQL fragment specifying a JOIN clause without ON or USING.
# File lib/sequel/dataset/sql.rb, line 421
def join_clause_sql(jc)
  table = jc.table
  table_alias = jc.table_alias
  table_alias = nil if table == table_alias
  tref = table_ref(table)
  " #{join_type_sql(jc.join_type)} #{table_alias ? as_sql(tref, table_alias) : tref}"
end
Returns a joined dataset. Uses the following arguments:
# File lib/sequel/dataset/sql.rb, line 469
def join_table(type, table, expr=nil, options={}, &block)
  if [Symbol, String].any?{|c| options.is_a?(c)}
    table_alias = options
    last_alias = nil
  else
    table_alias = options[:table_alias]
    last_alias = options[:implicit_qualifier]
  end
  if Dataset === table
    if table_alias.nil?
      table_alias_num = (@opts[:num_dataset_sources] || 0) + 1
      table_alias = dataset_alias(table_alias_num)
    end
    table_name = table_alias
  else
    table = table.table_name if table.respond_to?(:table_name)
    table_name = table_alias || table
  end

  join = if expr.nil? and !block_given?
    SQL::JoinClause.new(type, table, table_alias)
  elsif Array === expr and !expr.empty? and expr.all?{|x| Symbol === x}
    raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block_given?
    SQL::JoinUsingClause.new(expr, type, table, table_alias)
  else
    last_alias ||= @opts[:last_joined_table] || first_source_alias
    if Sequel.condition_specifier?(expr)
      expr = expr.collect do |k, v|
        k = qualified_column_name(k, table_name) if k.is_a?(Symbol)
        v = qualified_column_name(v, last_alias) if v.is_a?(Symbol)
        [k,v]
      end
    end
    if block_given?
      expr2 = yield(table_name, last_alias, @opts[:join] || [])
      expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2
    end
    SQL::JoinOnClause.new(expr, type, table, table_alias)
  end

  opts = {:join => (@opts[:join] || []) + [join], :last_joined_table => table_name}
  opts[:num_dataset_sources] = table_alias_num if table_alias_num
  clone(opts)
end
Reverses the order and then runs first. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
# File lib/sequel/dataset/convenience.rb, line 140
def last(*args, &block)
  raise(Error, 'No order specified') unless @opts[:order]
  reverse.first(*args, &block)
end
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset.
dataset.limit(10) # SQL: SELECT * FROM items LIMIT 10
dataset.limit(10, 20) # SQL: SELECT * FROM items LIMIT 10 OFFSET 20

# File lib/sequel/dataset/sql.rb, line 520
def limit(l, o = nil)
  return from_self.limit(l, o) if @opts[:sql]

  if Range === l
    o = l.first
    l = l.last - l.first + (l.exclude_end? ? 0 : 1)
  end
  l = l.to_i
  raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1
  opts = {:limit => l}
  if o
    o = o.to_i
    raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0
    opts[:offset] = o
  end
  clone(opts)
end
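Following the range handling above, a range becomes an offset at the range start plus an inclusive count (illustrative only):

dataset.limit(10..20) # SQL: SELECT * FROM items LIMIT 11 OFFSET 10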
Returns a literal representation of a value to be used as part of an SQL expression.
dataset.literal("abc'def\\") #=> "'abc''def\\\\'" dataset.literal(:items__id) #=> "items.id" dataset.literal([1, 2, 3]) => "(1, 2, 3)" dataset.literal(DB[:items]) => "(SELECT * FROM items)" dataset.literal(:x + 1 > :y) => "((x + 1) > y)"
If an unsupported object is given, an exception is raised.
# File lib/sequel/dataset/sql.rb, line 548
def literal(v)
  case v
  when String
    return v if v.is_a?(LiteralString)
    v.is_a?(SQL::Blob) ? literal_blob(v) : literal_string(v)
  when Symbol
    literal_symbol(v)
  when Integer
    literal_integer(v)
  when Hash
    literal_hash(v)
  when SQL::Expression
    literal_expression(v)
  when Float
    literal_float(v)
  when BigDecimal
    literal_big_decimal(v)
  when NilClass
    NULL
  when TrueClass
    literal_true
  when FalseClass
    literal_false
  when Array
    literal_array(v)
  when Time
    literal_time(v)
  when DateTime
    literal_datetime(v)
  when Date
    literal_date(v)
  when Dataset
    literal_dataset(v)
  else
    literal_other(v)
  end
end
Maps column values for each record in the dataset (if a column name is given), or performs the stock mapping functionality of Enumerable. Raises an error if both an argument and block are given. Examples:
ds.map(:id) => [1, 2, 3, ...]
ds.map{|r| r[:id] * 2} => [2, 4, 6, ...]

# File lib/sequel/dataset/convenience.rb, line 151
def map(column=nil, &block)
  if column
    raise(Error, MAP_ERROR_MSG) if block
    super(){|r| r[column]}
  else
    super(&block)
  end
end
Returns the maximum value for the given column.
# File lib/sequel/dataset/convenience.rb, line 161
def max(column)
  get{|o| o.max(column)}
end
Returns the minimum value for the given column.
# File lib/sequel/dataset/convenience.rb, line 166
def min(column)
  get{|o| o.min(column)}
end
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
dataset.multi_insert([{:x => 1}, {:x => 2}])
Be aware that all hashes should have the same keys if you use this calling method, otherwise some columns could be missed or set to null instead of to default values.
You can also use the :slice or :commit_every option that import accepts.
# File lib/sequel/dataset/convenience.rb, line 180
def multi_insert(hashes, opts={})
  return if hashes.empty?
  columns = hashes.first.keys
  import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts)
end

Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects a keys array and an array of value arrays.

This method should be overridden by descendants if they support inserting multiple records in a single SQL statement.

# File lib/sequel/dataset/sql.rb, line 592
def multi_insert_sql(columns, values)
  s = "#{insert_sql_base}#{source_list(@opts[:from])} (#{identifier_list(columns)}) VALUES "
  values.map{|r| s + literal(r)}
end
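With the default implementation above, each value array becomes its own INSERT statement (illustrative, assuming an items table):

dataset.multi_insert_sql([:x], [[1], [2]])
#=> ["INSERT INTO items (x) VALUES (1)", "INSERT INTO items (x) VALUES (2)"]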
Adds an alternate filter to an existing filter using OR. If no filter exists an error is raised.
dataset.filter(:a).or(:b) # SQL: SELECT * FROM items WHERE a OR b
# File lib/sequel/dataset/sql.rb, line 601
def or(*cond, &block)
  clause = (@opts[:having] ? :having : :where)
  raise(InvalidOperation, "No existing filter found.") unless @opts[clause]
  cond = cond.first if cond.size == 1
  clone(clause => SQL::BooleanExpression.new(:OR, @opts[clause], filter_expr(cond, &block)))
end
Returns a copy of the dataset with the order changed. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, and even SQL functions. If a block is given, it is treated as a virtual row block, similar to filter.
ds.order(:name).sql #=> 'SELECT * FROM items ORDER BY name'
ds.order(:a, :b).sql #=> 'SELECT * FROM items ORDER BY a, b'
ds.order('a + b'.lit).sql #=> 'SELECT * FROM items ORDER BY a + b'
ds.order(:a + :b).sql #=> 'SELECT * FROM items ORDER BY (a + b)'
ds.order(:name.desc).sql #=> 'SELECT * FROM items ORDER BY name DESC'
ds.order(:name.asc).sql #=> 'SELECT * FROM items ORDER BY name ASC'
ds.order{|o| o.sum(:name)}.sql #=> 'SELECT * FROM items ORDER BY sum(name)'
ds.order(nil).sql #=> 'SELECT * FROM items'

# File lib/sequel/dataset/sql.rb, line 621
def order(*columns, &block)
  columns += Array(virtual_row_block_call(block)) if block
  clone(:order => (columns.compact.empty?) ? nil : columns)
end
Returns a copy of the dataset with the order columns added to the existing order.
ds.order(:a).order(:b).sql #=> 'SELECT * FROM items ORDER BY b'
ds.order(:a).order_more(:b).sql #=> 'SELECT * FROM items ORDER BY a, b'

# File lib/sequel/dataset/sql.rb, line 632
def order_more(*columns, &block)
  order(*Array(@opts[:order]).concat(columns), &block)
end
Returns a paginated dataset. The returned dataset is limited to the page size at the correct offset, and extended with the Pagination module. If a record count is not provided, does a count of total number of records for this dataset.
# File lib/sequel/extensions/pagination.rb, line 11
def paginate(page_no, page_size, record_count=nil)
  raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit]
  paginated = limit(page_size, (page_no - 1) * page_size)
  paginated.extend(Pagination)
  paginated.set_pagination_info(page_no, page_size, record_count || count)
end
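A usage sketch, assuming a hypothetical items table (page_count and current_page come from the Pagination extension):

page = DB[:items].order(:id).paginate(2, 25) # SQL: SELECT * FROM items ORDER BY id LIMIT 25 OFFSET 25
page.page_count    # total number of pages
page.current_page  # 2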
Prepare an SQL statement for later execution. This returns a clone of the dataset extended with PreparedStatementMethods, on which you can call call with the hash of bind variables to do substitution. The prepared statement is also stored in the associated database. The following usage is identical:
ps = prepare(:select, :select_by_name)
ps.call(:name=>'Blah')
db.call(:select_by_name, :name=>'Blah')

# File lib/sequel/dataset/prepared_statements.rb, line 189
def prepare(type, name=nil, values=nil)
  ps = to_prepared_statement(type, values)
  db.prepared_statements[name] = ps if name
  ps
end
Create a named prepared statement that is stored in the database (and connection) for reuse.
# File lib/sequel/adapters/jdbc.rb, line 458
def prepare(type, name=nil, values=nil)
  ps = to_prepared_statement(type, values)
  ps.extend(PreparedStatementMethods)
  if name
    ps.prepared_statement_name = name
    db.prepared_statements[name] = ps
  end
  ps
end
SQL fragment for the qualified identifier, specifying a table and a column (or schema and table).

# File lib/sequel/dataset/sql.rb, line 652
def qualified_identifier_sql(qcr)
  [qcr.table, qcr.column].map{|x| [SQL::QualifiedIdentifier, SQL::Identifier, Symbol].any?{|c| x.is_a?(c)} ? literal(x) : quote_identifier(x)}.join('.')
end
Return a copy of the dataset with unqualified identifiers in the SELECT, WHERE, GROUP, HAVING, and ORDER clauses qualified by the given table. If no columns are currently selected, select all columns of the given table.
# File lib/sequel/dataset/sql.rb, line 665
def qualify_to(table)
  o = @opts
  return clone if o[:sql]
  h = {}
  (o.keys & QUALIFY_KEYS).each do |k|
    h[k] = qualified_expression(o[k], table)
  end
  h[:select] = [SQL::ColumnAll.new(table)] if !o[:select] || o[:select].empty?
  clone(h)
end
Qualify the dataset to its current first source. This is useful if you have unqualified identifiers in the query that all refer to the first source, and you want to join to another table which has columns with the same name as columns in the current dataset. See qualify_to.
# File lib/sequel/dataset/sql.rb, line 681
def qualify_to_first_source
  qualify_to(first_source)
end
Translates a query block into a dataset. Query blocks can be useful when expressing complex SELECT statements, e.g.:
dataset = DB[:items].query do
  select :x, :y, :z
  filter{|o| (o.x > 1) & (o.y > 2)}
  order :z.desc
end
Which is the same as:
dataset = DB[:items].select(:x, :y, :z).filter{|o| (o.x > 1) & (o.y > 2)}.order(:z.desc)
Note that inside a call to query, you cannot call each, insert, update, or delete (or any method that calls those), or Sequel will raise an error.
# File lib/sequel/extensions/query.rb, line 30
def query(&block)
  copy = clone({})
  copy.extend(QueryBlockCopy)
  copy.instance_eval(&block)
  clone(copy.opts)
end
Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, returns the name as a string. If identifiers are being quoted, quotes the name with quoted_identifier.

# File lib/sequel/dataset/sql.rb, line 688
def quote_identifier(name)
  return name if name.is_a?(LiteralString)
  name = name.value if name.is_a?(SQL::Identifier)
  name = input_identifier(name)
  name = quoted_identifier(name) if quote_identifiers?
  name
end
Whether this dataset quotes identifiers.
# File lib/sequel/dataset.rb, line 222
def quote_identifiers?
  @quote_identifiers
end
Separates the schema from the table and returns a string with them quoted (if quoting identifiers)
# File lib/sequel/dataset/sql.rb, line 698
def quote_schema_table(table)
  schema, table = schema_and_table(table)
  "#{"#{quote_identifier(schema)}." if schema}#{quote_identifier(table)}"
end
This method quotes the given name with the SQL standard double quote. It should be overridden by subclasses to provide quoting that does not match the SQL standard, such as backticks (used by MySQL and SQLite).

# File lib/sequel/dataset/sql.rb, line 706
def quoted_identifier(name)
  "\"#{name.to_s.gsub('"', '""')}\""
end
Split the schema information from the table
# File lib/sequel/dataset/sql.rb, line 718
def schema_and_table(table_name)
  sch = db.default_schema if db
  case table_name
  when Symbol
    s, t, a = split_symbol(table_name)
    [s||sch, t]
  when SQL::QualifiedIdentifier
    [table_name.table, table_name.column]
  when SQL::Identifier
    [sch, table_name.value]
  when String
    [sch, table_name]
  else
    raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String'
  end
end
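Illustrative results, assuming no default schema is set on the database:

dataset.schema_and_table(:table)          #=> [nil, "table"]
dataset.schema_and_table(:schema__table)  #=> ["schema", "table"]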
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to filter.
dataset.select(:a) # SELECT a FROM items
dataset.select(:a, :b) # SELECT a, b FROM items
dataset.select{|o| [o.a, o.sum(:b)]} # SELECT a, sum(b) FROM items

# File lib/sequel/dataset/sql.rb, line 742
def select(*columns, &block)
  columns += Array(virtual_row_block_call(block)) if block
  m = []
  columns.map do |i|
    i.is_a?(Hash) ? m.concat(i.map{|k, v| SQL::AliasedExpression.new(k,v)}) : m << i
  end
  clone(:select => m)
end
Returns a copy of the dataset selecting the wildcard.
dataset.select(:a).select_all # SELECT * FROM items
# File lib/sequel/dataset/sql.rb, line 754
def select_all
  clone(:select => nil)
end
Returns a copy of the dataset with the given columns added to the existing selected columns.
dataset.select(:a).select(:b) # SELECT b FROM items
dataset.select(:a).select_more(:b) # SELECT a, b FROM items

# File lib/sequel/dataset/sql.rb, line 763
def select_more(*columns, &block)
  select(*Array(@opts[:select]).concat(columns), &block)
end
Formats a SELECT statement
dataset.select_sql # => "SELECT * FROM items"
# File lib/sequel/dataset/sql.rb, line 770
def select_sql
  return static_sql(@opts[:sql]) if @opts[:sql]
  sql = 'SELECT'
  select_clause_order.each{|x| send(:"select_#{x}_sql", sql)}
  sql
end
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (SELECT queries use the :read_only database and all other queries use the :default database).

# File lib/sequel/dataset.rb, line 235
def server(servr)
  clone(:server=>servr)
end
This allows you to manually specify the graph aliases to use when using graph. You can use it to only select certain columns, and have those columns mapped to specific aliases in the result set. This is the equivalent of .select for a graphed dataset, and must be used instead of .select whenever graphing is used. Example:
DB[:artists].graph(:albums, :artist_id=>:id).
  set_graph_aliases(:artist_name=>[:artists, :name], :album_name=>[:albums, :name], :forty_two=>[:albums, :fourtwo, 42]).first
=> {:artists=>{:name=>artists.name}, :albums=>{:name=>albums.name, :fourtwo=>42}}
Arguments:
# File lib/sequel/dataset/graph.rb, line 175
def set_graph_aliases(graph_aliases)
  ds = select(*graph_alias_columns(graph_aliases))
  ds.opts[:graph_aliases] = graph_aliases
  ds
end
Same as select_sql, not aliased directly to make subclassing simpler.
# File lib/sequel/dataset/sql.rb, line 778
def sql
  select_sql
end
Whether the dataset supports common table expressions (the WITH clause).
# File lib/sequel/dataset.rb, line 258
def supports_cte?
  select_clause_order.include?(WITH_SUPPORTED)
end
Whether the dataset supports the DISTINCT ON clause, true by default.
# File lib/sequel/dataset.rb, line 263
def supports_distinct_on?
  true
end
Whether the dataset supports the IS TRUE syntax.
# File lib/sequel/dataset.rb, line 278
def supports_is_true?
  true
end
Whether the dataset supports window functions.
# File lib/sequel/dataset.rb, line 293
def supports_window_functions?
  false
end
Returns a string in CSV format containing the dataset records. By default the CSV representation includes the column titles in the first line. You can turn that off by passing false as the include_column_titles argument.
This does not use a CSV library or handle quoting of values in any way. If any values in any of the rows could include commas or line endings, you shouldn't use this.

# File lib/sequel/dataset/convenience.rb, line 221
def to_csv(include_column_titles = true)
  n = naked
  cols = n.columns
  csv = ''
  csv << "#{cols.join(COMMA_SEPARATOR)}\r\n" if include_column_titles
  n.each{|r| csv << "#{cols.collect{|c| r[c]}.join(COMMA_SEPARATOR)}\r\n"}
  csv
end
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
# File lib/sequel/dataset/convenience.rb, line 234
def to_hash(key_column, value_column = nil)
  inject({}) do |m, r|
    m[r[key_column]] = value_column ? r[value_column] : r
    m
  end
end
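For illustration, assuming an items table with id and name columns:

DB[:items].to_hash(:id, :name) #=> {1=>'abc', 2=>'def', ...}
DB[:items].to_hash(:id)        #=> {1=>{:id=>1, :name=>'abc'}, ...}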
Truncates the dataset. Returns nil.
# File lib/sequel/dataset.rb, line 298
def truncate
  execute_ddl(truncate_sql)
end
SQL query to truncate the table
# File lib/sequel/dataset/sql.rb, line 788
def truncate_sql
  if opts[:sql]
    static_sql(opts[:sql])
  else
    check_modification_allowed!
    raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where]
    _truncate_sql(source_list(opts[:from]))
  end
end
Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options:
DB[:items].union(DB[:other_items]).sql #=> "SELECT * FROM items UNION SELECT * FROM other_items"

# File lib/sequel/dataset/sql.rb, line 821
def union(dataset, opts={})
  opts = {:all=>opts} unless opts.is_a?(Hash)
  compound_clone(:union, dataset, opts)
end
Updates values for the dataset. The returned value is generally the number of rows updated, but that is adapter dependent.
# File lib/sequel/dataset.rb, line 304
def update(values={})
  execute_dui(update_sql(values))
end
Formats an UPDATE statement using the given values.
dataset.update_sql(:price => 100, :category => 'software') #=> "UPDATE items SET price = 100, category = 'software'"
Raises an error if the dataset is grouped or includes more than one table.
# File lib/sequel/dataset/sql.rb, line 847
def update_sql(values = {})
  opts = @opts

  return static_sql(opts[:sql]) if opts[:sql]

  check_modification_allowed!

  sql = "UPDATE #{source_list(@opts[:from])} SET "
  set = if values.is_a?(Hash)
    values = opts[:defaults].merge(values) if opts[:defaults]
    values = values.merge(opts[:overrides]) if opts[:overrides]
    # get values from hash
    values.map do |k, v|
      "#{[String, Symbol].any?{|c| k.is_a?(c)} ? quote_identifier(k) : literal(k)} = #{literal(v)}"
    end.join(COMMA_SEPARATOR)
  else
    # copy values verbatim
    values
  end
  sql << set
  if where = opts[:where]
    sql << " WHERE #{literal(where)}"
  end

  sql
end
Add a condition to the WHERE clause. See filter for argument types.
dataset.group(:a).having(:a).filter(:b) # SELECT * FROM items GROUP BY a HAVING a AND b
dataset.group(:a).having(:a).where(:b)  # SELECT * FROM items WHERE b GROUP BY a HAVING a

# File lib/sequel/dataset/sql.rb, line 878
def where(*cond, &block)
  _filter(:where, *cond, &block)
end
The SQL fragment for the given window's options.

# File lib/sequel/dataset/sql.rb, line 883
def window_sql(opts)
  raise(Error, 'This dataset does not support window functions') unless supports_window_functions?
  window = literal(opts[:window]) if opts[:window]
  partition = "PARTITION BY #{expression_list(Array(opts[:partition]))}" if opts[:partition]
  order = "ORDER BY #{expression_list(Array(opts[:order]))}" if opts[:order]
  frame = case opts[:frame]
  when nil
    nil
  when :all
    "ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING"
  when :rows
    "ROWS UNBOUNDED PRECEDING"
  else
    raise Error, "invalid window frame clause, should be :all, :rows, or nil"
  end
  "(#{[window, partition, order, frame].compact.join(' ')})"
end
Add a simple common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query. Options:
# File lib/sequel/dataset/sql.rb, line 911
def with(name, dataset, opts={})
  raise(Error, 'This datatset does not support common table expressions') unless supports_cte?
  clone(:with=>(@opts[:with]||[]) + [opts.merge(:name=>name, :dataset=>dataset)])
end
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE. Options:
# File lib/sequel/dataset/sql.rb, line 921
def with_recursive(name, nonrecursive, recursive, opts={})
  raise(Error, 'This datatset does not support common table expressions') unless supports_cte?
  clone(:with=>(@opts[:with]||[]) + [opts.merge(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))])
end
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
dataset.with_sql('SELECT * FROM foo') # SELECT * FROM foo
# File lib/sequel/dataset/sql.rb, line 930
def with_sql(sql, *args)
  sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty?
  clone(:sql=>sql)
end
Return true if the dataset has a non-nil value for any key in opts.
# File lib/sequel/dataset.rb, line 314
def options_overlap(opts)
  !(@opts.collect{|k,v| k unless v.nil?}.compact & opts).empty?
end
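A sketch of the check; this method is normally used internally (e.g. by count above), so send is used here for illustration:

DB[:items].send(:options_overlap, [:limit, :group])           #=> false
DB[:items].limit(10).send(:options_overlap, [:limit, :group]) #=> true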
Whether this dataset is a simple SELECT * FROM table.
# File lib/sequel/dataset.rb, line 319
def simple_select_all?
  o = @opts.reject{|k,v| v.nil?}
  o.length == 1 && (f = o[:from]) && f.length == 1 && f.first.is_a?(Symbol)
end
Return a cloned copy of the current dataset extended with PreparedStatementMethods, setting the type and modify values.
# File lib/sequel/dataset/prepared_statements.rb, line 199
def to_prepared_statement(type, values=nil)
  ps = clone
  ps.extend(PreparedStatementMethods)
  ps.prepared_type = type
  ps.prepared_modify_values = values
  ps
end