SQL
Introduction
SQL nodes are created from a provided SQL SELECT statement, specified from within a .strm file.
Use a SQL node to:
- Join two or more queues together
- Create additional variables
- Perform aggregation
- Perform filtering
- Perform duplicate removal
SQL nodes determine their input queue names from standard SQL SELECT statement
syntax, with the output destination specified using the mandatory INTO <queue name>
syntax. Any variable defined on the output queue but not included in the SELECT
statement defaults to NaN for numeric fields or an empty string for character
fields. Intermediate variables are supported, but they trigger warning logs because
they are not persisted. Node progress tracking is based on the outermost SELECT query.
Tip
Using .strm files is the recommended practice. To create an SQL node within a .strm file, follow this format:
// Name your process
name sqlprocess
// read from the input file
in input.csv -> input
// create an SQL node
sql [insert SQL SELECT statement]
// write out the results
out output -> output.csv
Configuring the Default SQL Syntax Version
VOR supports two versions of SQL syntax, with version 2 being the default. However, you have the flexibility to configure the default version or use a mix of versions in your processes. Configuration can be set at three levels, listed below in order of precedence (from lowest to highest):
- Global configuration: Contact your VOR administrator to configure the default SQL syntax version at the system-wide level. The specified version then applies to all SQL nodes by default.
- Process level: Use the default-sql-version flag to specify the version number when creating a process. This sets the default SQL syntax version for all SQL nodes within the process.
- Node level: Use the syntax_version option to configure the SQL syntax version for a specific SQL node. This overrides any global or process-level configuration for that node.
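As a sketch of mixing the levels, assuming the flag and option spellings above (the process, file, and queue names are illustrative), a process can be built with v1 as its default while a single node opts into v2:

```
name mixed_versions
// read from the input file
in input.csv -> input
// this node uses the process default (v1)
sql select * from input into staging;
// this node overrides the default with v2
sql select * from staging into output; syntax_version=2
// write out the results
out output -> output.csv
```

Building with `vor create process mixed_versions.strm --default-sql-version 1` would then apply v1 everywhere except the node carrying the syntax_version option.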
Optimizations
SQL queries are automatically optimized for better performance. To disable
these optimizations, use the disable-sql-optimization flag with the
vor create process command. This may be necessary if the default
optimizations cause unexpected behavior or compatibility issues.
Warning
- Disabling SQL optimization may impact the performance of your queries.
- Use this option only when necessary, as it can lead to slower query execution times.
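For example, assuming the flag is passed directly on the vor create process command line (the process file name is illustrative):

```
$ vor create process my_process.strm --disable-sql-optimization
```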
Version Migration Guide
This guide provides information on breaking changes, deprecated features, and recommended actions to migrate to the latest version of the SQL node.
Key Changes
Standard SQL Join Syntax
Join conditions must now be specified in the ON/USING clause rather than
the WHERE clause. The WHERE clause should contain only filtering
conditions, not join criteria (except when using implicit joins).
Before (v1):
select input_csv1.*
from input_csv1
left join input_csv2
where input_csv1.id = input_csv2.id and value > 100
into output_join1;
After (v2):
select input_csv1.*
from input_csv1
inner join input_csv2
on (input_csv1.id = input_csv2.id)
where value > 100
into output_join1;
Implicit Joins Still Supported
Implicit joins with conditions in the WHERE clause continue to work:
select input_csv1.*
from input_csv1, input_csv2
where input_csv1.id = input_csv2.id and value > 100 into output_join1;
Alias for Aggregates
All aggregation functions now require explicit aliases, and GROUP BY columns must be explicitly included in the SELECT statement.
Before (v1):
select class1, sum(value), avg(value), count(value)
from input
group by class1 into gboutput;
After (v2):
select class1,
sum(value) as value,
avg(value) as avg,
count(value) as count
from input
group by class1 into gboutput;
Standardized Set Operations
The syntax for EXCEPT and other set operations has been updated to align
with ANSI SQL standards. See the SET OPERATION section for detailed examples.
Explicit Column References
If a field name is present in multiple tables in a query and VOR is unable to resolve the ambiguity, an error is thrown. To resolve this, prefix the column name with the table name.
Before (v1):
select id, value
from input join input2
on (input.id = input2.id)
into output
After (v2):
select input.id, input.value
from input join input2
on (input.id = input2.id)
into output
Direct Expression References
Expressions must be referenced directly rather than through aliases defined in the same SELECT statement.
Before (v1):
select cast(val1 as char) as value, val + 1 as val1
from input
into output;
After (v2):
select cast(val + 1 as char) as value, val + 1 as val1
from input
into output;
Simplified Query Structure
Queries now follow a more standardized format: a single ORDER BY is allowed, on the
outermost SELECT statement in the query, and it is always placed before the INTO
clause.
select * from input order by column1, column2 desc into output;
Migration Steps
Running vor create process automatically upgrades the SQL node to the
latest changes; any errors are displayed in the console, pointing to the
origin of the error.
$ vor create process ambigous_column.strm
select id, value from input join input2 on (input.id = input2.id) into output
^^
ERROR Syntax error at line 1, column 7: Ambiguous column id
If you have a large number of queries, or complex queries, and prefer to
migrate them gradually, you can use the default-sql-version flag to build the
process with version 1. This maintains compatibility while you test
individual queries against the latest version by setting the syntax_version
option at the node level.
$ cat my_query.strm
name my_query
in input.csv -> input
in input2.csv -> input2
sql select * from input into passthrough;
sql select passthrough.id, value
from passthrough join input2
on (passthrough.id = input2.id)
into output; syntax_version=2
out output -> output.csv
$ vor create process my_query.strm --default-sql-version 1
Compatibility
- SQL v2 aims to be more SQL standard compliant.
- Both versions are supported in the current release and can coexist in the same process.
SQL Syntax Reference
Basic SELECT
A basic SELECT copies one table into another:
select *
from input_table into output_table;
Creating additional fields:
select *,
replace(id, "Hi", "Bye") as new
from input_table into output_table;
Printing to the log:
select *,
print("id:", id, "value:", value) as print,
printf("This should be positive %f", value) as printf
from input_table into output_table;
Conditional SELECT
Conditional values with an IF statement:
select if lower(id) = "laura" then
"found Laura"
else
"Not Laura"
end as id
from input_csv1 into output;
Conditional values with a CASE statement:
select case
when lower(id) = "daniel" then upper(id)
when lower(id) = "laura" then "found Laura"
else "Not Laura"
end as id
from input_csv1 into output_join1;
Filtered SELECT
Several basic examples:
select ron
from tbl_mine
where not isnull(ron) into Golf;
CAST
Nested CAST function example:
select *, CAST(CAST(num as char) as num)
from tbl_mine into output;
select *, CAST(char AS date) as date
from tbl_tim
where date > date("January 1,1970") and
date > "2019/10/01"d into home;
Supported variable types for CAST function: int, num, char, date
CURRENT_DATE
select CURRENT_DATE as date
from tbl_tim;
CURRENT_TIMESTAMP
select CURRENT_TIMESTAMP as datetime
from tbl_tim;
DATE
select *
from tbl_tim
where date > date("January 1,1970") and
date > "2019/10/01"d into home;
LIKE
select *
from john_tbl
where name like 'Let\S*' into final;
IN
select *
from john_tbl
where lower(name) in ('john', 'paul', 'ringo','george') into final;
All of the types in the IN statement must match. Supported types include: int, num, char, and date.
DISTINCT
select distinct class1
from port
where not isnull(class1) into final;
DEDUP BY
select * from port
dedup by region into bar;
The DEDUP BY clause removes duplicate records based on the selected keys.
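Since duplicates are removed based on the selected keys (plural), multiple keys can be listed. A sketch, reusing the port queue and its class1 column from the DISTINCT example:

```sql
select * from port
dedup by region, class1 into bar;
```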
GETDATE()
select GETDATE() as date
from tbl_tim;
GROUP BY
When working with numeric data in SQL SELECT ... GROUP BY statements, SQL supports functions for aggregating the grouped records.
The following aggregations are supported when specified in a SQL SELECT ... GROUP BY statement:
- SUM() - The sum of the values, excluding NULL and NaN. It is an error to sum a non-NUM column.
- MIN() - The minimum of the values, excluding NULL and NaN.
- MAX() - The maximum of the values, excluding NULL and NaN.
- AVG() - The average of the values, excluding NULL and NaN.
- COUNT() - The count of the non-NULL values.
HAVING Clause
The HAVING clause is used in combination with the GROUP BY clause to filter the results of an aggregation. The HAVING clause is applied to each group of rows and returns only the groups that meet the specified condition.
Note
Currently, aggregation functions are not supported in the HAVING clause, so the following is invalid:
select sum(value) as sum
from input
group by id
having sum(value) > 100.0
into gboutput;
Instead, define the aggregation in the SELECT statement with an alias and use that alias in the HAVING clause:
select sum(value) as sum
from input
group by id
having sum > 100.0
into gboutput;
Examples
Several basic examples:
select class1,
sum(value) as sum,
count(value) as count,
avg(value) as avg,
min(value) as min,
max(value) as max
from input
group by class1 into gboutput;
With a HAVING clause:
select class1,
sum(value) as sum
from input
group by class1
having sum > 100.0 into gboutput;
If a prediction queue is selected, the predictions will only be created for the SUM() terms.
select class1,
sum(value) as sum,
count(value) as count,
avg(value) as avg,
min(value) as min,
max(value) as max
from input group by class1 into gboutput;
Warning
SQL v2 requires explicit aliases for all aggregation functions to avoid ambiguity. For example, the following query results in a syntax error:
select sum(value), count(value) from input group by id into gboutput;
Instead, write:
select sum(value) as sum, count(value) as count from input group by id
into gboutput;
JOIN
Joins can be performed using any of the following methods:
- USING clause: Used when the columns to be joined have the same name in both tables.
- ON clause: Allows complex equality expressions. It is a step up from a USING clause because the field names in the joined files do not have to match.
- Implicit join: Performed by listing tables in the FROM clause separated by commas and specifying the join condition in the WHERE clause.
Warning
In SQL v2, the WHERE clause can carry join conditions only for implicit joins. If you
need the previous behavior, set the syntax_version option to 1 in your stream file.
Types of Joins
- Inner Join: Returns records that have matching values in both tables.
- Left Join (or Left Outer Join): Returns all records from the left table, and the matched records from the right table. The result is zero values from the right side if there is no match.
- Right Join (or Right Outer Join): Returns all records from the right table, and the matched records from the left table. The result is zero values from the left side when there is no match.
- Full Join (or Full Outer Join): Returns all records when there is a match in either left or right table. Records not matching in either table will have zero values.
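As a sketch of the difference, reusing the queue names from the examples below, a left join keeps every record from the left table and fills unmatched right-side columns with zero values:

```sql
select input_csv1.id, input_csv2.class2
from input_csv1
left join input_csv2
on (input_csv1.id = input_csv2.id)
into output_join1;
```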
Handling Ambiguous Columns
When performing joins, it is common to encounter columns with the same name in both tables. To avoid ambiguity, you can use table aliases and fully qualify column names.
Using Table Aliases
Table aliases make your SQL queries more readable and help avoid column name conflicts. Here is an example:
select a.id, a.name, b.city
from customer as a
inner join
order as b
on (a.id = b.customer_id) into output_join1;
Fully Qualifying Column Names
Fully qualifying column names ensures that each column reference is unambiguous. Here is an example:
select customer.id, customer.name, order.date
from customer
inner join
order
on customer.id = order.customer_id into output_join1;
Best Practices
- The "join" table should be the smaller of the two tables, or the first one available, as it is processed completely before the joining starts. This improves how quickly results start appearing on the output queue and reduces the memory consumed by the join.
- For join statements with only equality conditions and same-named columns, use the USING clause for better performance.
Examples
USING clause
select *
from input_fact
left join
port_dim using (portfolio_sk,portfolio_nm)
left join
scenario_dim using (scenario_sk,scenario_nm,scenariotype_nm)
left join
geography_dim using (geo_sk,country_nm,country_code,region_nm,stateprovince_nm,stateprovince_code,city_nm)
left join
rpt_hier_dim using (geo_sk,portfolio_sk,scenario_sk,product_nm,productcategory_nm,instrumenttype_nm)
into fact_wt_new_port_sk;
ON clause
select input_csv1.class1, input_csv2.class2
from input_csv1
full join
input_csv2
on(input_csv2.class2 = input_csv1.id and input_csv2.id = input_csv1.class1 ) into output_join1;
Implicit Join
select input_csv1.* from input_csv1, input_csv2 where (input_csv1.id = input_csv2.id) into output_join1;
Using an EXCEPT clause (SQL v1 only)
Warning
The EXCEPT functionality has changed in SQL v2. If you need the previous behavior, set the syntax_version option to 1 in your stream file.
An EXCEPT clause filters out observations that match a particular input data set; it can be thought of as the opposite of a JOIN. In VOR Stream, EXCEPT uses a different syntax from standard SQL: there is no requirement to match variables in the SELECT statements, and VOR supports a shortcut syntax. The basic syntax is the same as a JOIN statement, with EXCEPT used in place of JOIN, but unlike JOIN there is no LEFT, RIGHT, FULL, or INNER EXCEPT. Here are some examples:
select * from input_csv1 except input_csv2 using( id ) into output_join1;
select * from input_csv1 except input_csv2 on( input_csv1.id = input_csv2.id ) into output_join1;
select * from (select * from input_csv1 except input_csv2 where input_csv1.id = input_csv2.id) into output_join1;
select stage, sum(ecl_stage) as sum_ecl
from port_staging except select * from prev_port_staging on(acct_id)
group by stage into originated_ecl;
SET OPERATION (SQL v2 only)
Set operations are used to combine the results of two or more SELECT statements. The following set operations are supported:
- UNION - Combines the results of two or more SELECT statements.
- UNION ALL - Combines the results of two or more SELECT statements, including duplicates.
- INTERSECT - Returns only the rows that appear in both result sets.
- EXCEPT - Returns only the rows that appear in the first result set but not in the second result set.
Operator Precedence
Set operations follow a specific precedence order when multiple operations are used together:
- INTERSECT has the highest precedence
- EXCEPT, UNION, and UNION ALL have the same precedence
Operations with the same precedence are evaluated from left to right.
-- This query evaluates as: (table1 INTERSECT table2) UNION table3
select col1, col2, col3 from table1
intersect
select cola, colb, colc from table2
union
select colx, coly, colz from table3
INTO output;
To override the default precedence, use parentheses with subqueries:
-- This forces UNION to be evaluated before INTERSECT
select col1, col2, col3 from (
select col1, col2, col3 from table1
union
select cola, colb, colc from table2
)
intersect
select colx, coly, colz from table3
into output;
Important Considerations
- Positional columns: Set operations match columns by position, not by name. The first column in the first query is matched with the first column in the second query, and so on.
- Column names: The result set uses the column names from the first query.
- Data types: The data types of corresponding columns in each query should be compatible.
- Avoid SELECT *: Using SELECT * in set operations can lead to unexpected results because the order of the columns is not guaranteed. Instead, explicitly list columns in the desired order:
-- Recommended approach
select col1, col2, col3 from table1
union
select cola, colb, colc from table2
into output;
Examples
The syntax for set operations is as follows:
select id, class1 from input1
union
select id, class2 from input2 into output;
select id, class1 from input1
union all
select id, class2 from input2 into output;
select id, class1 from input1
intersect
select id, class2 from input2 into output;
select id, class1 from input1
except
select id, class2 from input2 into output;
ORDER BY
The results of a query can optionally be sorted using the ORDER BY clause.
The clause takes a comma-separated list of sort expressions. Each sort
expression can be followed by an optional ASC or DESC keyword to specify
the sort order. The default sort order is ASC.
Examples
select * from input order by column1, column2 desc into output;
Output Data to a Dynamic Fact
A SQL SELECT statement can create a dynamic fact instead of a table. To do this, add the setdyn/setfact option to the SQL SELECT statement in the .strm file and specify the dynamic fact name as the INTO destination.
An example .strm file entry is:
sql select distinct class1
from input_csv1
where class1="Oysterhive" into dynFact_case1; setdyn=dynFact_case1 name=dynFact_case1
Using WITH KEY
When using dynamic facts in SQL, the additional WITH KEY clause can be used to specify a dictionary or map key to be created as part of the dynamic fact.
An example using WITH KEY is as follows:
sql select class1,
value,
id
from input_csv1 into dynFact_case3 with key class1; setdyn=dynFact_case3 name=dynFact_case3
Use Cases
There are six cases for writing out dynamic facts from SQL:
1. A single column is selected and there is a single row in the result table. The dynamic fact is a single value. For Golang, cast the u.DynFactGet() return as the column type (example: u.DynFactGet("dynFact_case1").(string)).
2. A single column is selected and there are multiple rows in the result table. The dynamic fact is an array of the column type. For Golang, cast the u.DynFactGet() return as an array of the column type (example: u.DynFactGet("dynFact_case2").([]float64)).
3. Two columns are selected, one of them is the WITH KEY, and the key is unique for each selected row. The dynamic fact is a map/dict with map[key]=value. For Golang, cast the u.DynFactGet() return as a map[string] of the column type (example: u.DynFactGet("dynFact_case3").(map[string]float64)).
4. Two columns are selected, one of them is the WITH KEY, and the key is not unique for each selected row. The dynamic fact is a map/dict with map[key][index]=value. For Golang, cast the u.DynFactGet() return as a map[string] of an array of the column type (example: u.DynFactGet("dynFact_case3").(map[string][]float64)).
5. More than two columns are selected and one of them is the WITH KEY. The dynamic fact is a map/dict with map[key]=(map[variableName]=value). For Golang, cast the u.DynFactGet() return as a map of maps of interfaces (example: fact4 := u.DynFactGet("dynFact_case4").(map[string]map[string]interface {})). Then, to access a value from a numeric column foo with key equal to mine, use fact4["mine"]["foo"].(float64). Technically another case: if the key is not unique, the returned format is map[string][]map[string]interface {}.
6. Two or more columns are selected without a WITH KEY. The dynamic fact is an array of map[variableName]=value.
Here is an example of case three. Given an input file:
key,value
john,3.0
paul,77
ringo,-1.0
george,.005
and the following .strm entries:
in input_csv -> input_csv1
sql select key,
value
from input_csv1 into dynFact_case3 with key key; setdyn=dynFact_case3 name=dynFact_case3
node readdyn( null )(null) getdyn=dynFact_case3
Add:
case3 := u.DynFactGet("dynFact_case3").(map[string]float64)
log.Println("Case3", case3, case3["ringo"])
to the init block of readdynU.go, along with the import of "log". When the
program is run, it produces:
INFO [2021-09-03 09:46:36] Case3 map[john:3.0 paul:77.0 ringo:-1.0 george:0.005] -1.0
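The cast-by-case pattern above can be sketched in plain Go. Here u.DynFactGet() is stood in by a local helper over a map of interface{} values; dynFactGet, the store, and the sample values are illustrative, not a real VOR API:

```go
package main

import "fmt"

// dynFactGet stands in for u.DynFactGet(): it returns an untyped value
// that the caller must type-assert to the concrete shape of the fact.
func dynFactGet(store map[string]interface{}, name string) interface{} {
	return store[name]
}

func main() {
	store := map[string]interface{}{
		// Case 1: one column, one row -> a single value
		"dynFact_case1": "Oysterhive",
		// Case 3: two columns with a unique WITH KEY -> map[key]value
		"dynFact_case3": map[string]float64{
			"john": 3.0, "paul": 77, "ringo": -1.0, "george": 0.005,
		},
	}

	// Type-assert each fact to its expected concrete type.
	case1 := dynFactGet(store, "dynFact_case1").(string)
	case3 := dynFactGet(store, "dynFact_case3").(map[string]float64)

	fmt.Println("Case1", case1)          // Case1 Oysterhive
	fmt.Println("Case3", case3["ringo"]) // Case3 -1
}
```

A wrong type assertion panics at run time; using the two-value form (v, ok := x.(T)) avoids the panic when the fact's shape is uncertain.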
Evaluating Run Time Expressions in SQL
It is often desirable to provide expressions at run time. For example, the WHERE
clause expression used when filtering a portfolio can be selected at run time
instead of being static in the code. The evalbool() function compiles and runs
the WHERE clause expression without requiring changes to the SQL code.
select *
from inputPortfolio
where evalbool(charopt("IFRS9.port.where")) into selPort;
For evalbool(), evalchar(), and evalnum(), the input string is an
expression. For performance reasons, that expression should be a constant, but it
does not need to be. Expressions mostly follow Golang syntax, with the following
differences:
- Variables are case-insensitive, but function names are not
- Date literals are supported in any permutation of the RFC 3339 and ISO 8601 standards, e.g. '2020-10-01' and '10jun2022'd
- % is used as a mod operator
- String constants and dates are enclosed in single quotes; dates can be suffixed with a 'd'
- The IN operator is supported
- Ternary conditional operators are supported, e.g. "x > y ? 'yes' : 'no'"
- Regular expressions are supported with the '=~' and '!~' operators, using Golang's standard flavor for regular expressions; the left side is the candidate string and the right side is the pattern
Because these are Golang expressions, SQL keywords such as and, or, and not
are not supported; use &&, ||, and ! instead. The functions supported in an
expression are the same functions supported in SQL, except that boolopt(),
charopt(), and numopt() are not supported. Variable values are obtained from
the input queue.
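For instance, combining the Golang-style operators above in run-time expressions (the region and balance column names are illustrative):

```sql
select *,
       evalchar("balance > 1000000 ? 'large' : 'small'") as bucket
from inputPortfolio
where evalbool("region =~ '^EMEA' && balance > 0") into selPort;
```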
Working with Dates in SQL
The following are all valid date literals in SQL:
- "2021-01-01"d
- '01/01/1965'd
- "Q1 1990"D
- "02jan1965"d
Note that the first two dates use the formats "yyyy-mm-dd" and the European
"dd/mm/yyyy". The date() function parses a string as a date, so it can
operate on a SQL field name or a string literal. Because date() parses
for every observation processed, it is more efficient to use a date literal
when possible.
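A sketch of that trade-off, reusing the tbl_tim queue from earlier examples (the output queue names are illustrative):

```sql
-- date literal: parsed once when the query is compiled
select * from tbl_tim where date > "02jan1965"d into fast;
-- date(): parsed again for every observation processed
select * from tbl_tim where date > date("02jan1965") into slow;
```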
SQL Functions
The following functions are supported:
Function | Description | Example |
---|---|---|
abs(x) | Return the absolute value of a floating-point number | abs(-10) -> 10 |
boolopt(x) | Return a Boolean option value from the process options in the joboptions.json file | boolopt("IFRS9.models.useChampion") -> true |
cbrt(x) | Compute the cube root of x | cbrt(27) -> 3 |
current_date() | Return today's date as a date value | current_date() |
current_timestamp() | Return the current date and time as a datetime value | current_timestamp() |
ceil(x) | Return the closest integer >= x | ceil(1.2) -> 2 |
coalesce(x1, x2, x3…) | Return the first non-missing xi | coalesce(NaN, NaN, 3) -> 3 |
charopt(x) | Return a string option value from the process options in the joboptions.json file | charopt("IFRS9.models.championModel") -> "Current_champ" |
date(date literal) | Convert a literal date into a date value | date("03/06/1972") -> d |
dateadd(interval, n intervals, date) | Add n intervals to a date type and return a date. n can be negative. Allowed intervals: year, quarter, month, week, day, weekday. The function supports an optional argument for date alignment: "begin" aligns to the beginning of the interval; "end" aligns to the end of the interval; "same" aligns to the exact amount specified by n. The date alignment option is available for year, quarter, month, and week. Note: the weekday and week intervals are based on weeks that start on Sunday and end on Saturday, where Monday-Friday are weekdays | d <- 10/04/2020 newdate = dateadd("quarter", 3, d) -> 06/01/2021 newdate2 = dateadd("year", 1.0, d, "end") -> 12/31/2021 |
datediff(interval, date, date) | Compute the number of intervals between two dates. The number of intervals can be negative. The return type is num. Allowed intervals: year, quarter, month, week, day, weekday. Note: the weekday and week intervals are based on weeks that start on Sunday and end on Saturday, where Monday-Friday are weekdays | d1 <- 6/28/2021 d2 <- 8/5/2021 datediff("month", d1, d2) -> 2.0 |
dateopt(x) | Return a date option value from the process options in the joboptions.json file | dateopt("IFRS9.models.runDate") -> 04/10/1967 |
evalbool(expression) | Return the boolean result from evaluating an expression. | evalbool("expiry_date > '2025-10-01'") -> false |
evalchar(expression) | Return the character result from evaluating an expression. | evalchar("lower(instid) + '_' + poolID") -> instid76_376 |
evalnum(expression) | Return the numeric result from evaluating an expression. | evalnum("exp(2)") -> 7.38905609 |
exp(x) | Return ex | exp(2) -> 7.38905609 |
floor(x) | Return the closest integer <= x | floor(1.2) -> 1 |
getdate() | Return today’s date as a datetime value | getdate() |
if(bool,then,else) | Evaluates a boolean expression, and executes one expression if the boolean is true, and executes a different expression if false | if(n1>n2, n1-n2, n1+n2) -> subtracts n2 from n1 if true, and adds the numbers together if false |
variable in(s1, s2...) | Returns true if the value of the variable is one of the strings in the list. Note: a SELECT statement inside the IN list is not supported | company <- "FRG" company IN("FRG", "SAS") -> true |
isnull(x) | Returns true if the argument passed in is null. For the following data types, these are the values that are considered as null: string - empty string, date - 1/1/1970, num - NaN | num <- 6.4 isnull(num) -> false |
len(s) | Return the length of a string | len("Hi There") -> 8 |
variable like "regular expression" | Returns true if the value of the variable satisfies the regular expression (See Golang - regular expressions) | Company <- "Financial Risk Group" Company like "F*" -> true |
ln(x) | Compute the natural log of x | ln(7.38905609) -> 2 |
log(x) | Compute the log base 10 of x | log(7.38905609) -> 0.8685889 |
lower(s) | Lower case the string s | lower("Hi There") -> hi there |
max(x1, x2, ...) | Returns the maximum number from the list provided. Note: at least one argument must be passed | max(1, 2, 3) -> 3 |
min(x1, x2, ...) | Returns the minimum number from the list provided. Note: at least one argument must be passed | min(1, 2, 3) -> 1 |
mod(x, y) | Modulus operator | mod(3, 2) -> 1 |
numopt(x) | Return number option value from the process options in the joboptions.json file |
numopt("IFRS9.models.multiplier") -> 10 |
pi() | Returns the value of PI | pi() -> 3.1415926535897932 |
power(x, y) | Compute xy | power(2, 16) -> 65536 |
print(s1, s2...) | Prints all arguments to terminal with spaces between arguments. There is no required datatype for arguments | print(1.0, "hello", 3.41) -> 1.0 hello 3.41 |
printf(s, any...) | Prints a formatted string to the terminal. The first argument is a format string, and the remaining arguments are the values to be formatted. | printf("This should be positive %f", 3.41) -> This should be positive 3.410000 |
replace(s1, s2, s3) | Replace all instances of s2 with s3 in the string s1 | replace("Hi there", "Hi", "Bye") -> Bye there |
round(x) | Return the closest integer to x | round(3.2) -> 3 |
sign(x) | Return -1 if x is negative, else return 1 | sign(-20) -> -1 |
sqrt(x) | Compute the square root of x | sqrt(9) -> 3 |
substring(s, start, length) | Return a substring of s, starting at start, of length characters. Indexes are zero-based | substring("Hi there", 0, 2) -> Hi |
trim(input, cutset) | Trim the input string of the characters provided in the cutset. If no cutset is provided, the function trims spaces | trim("--Hello^^^", "-^") -> "Hello" trim("--Hello^^^") -> "--Hello^^^" |
trunc(x) | Return the integer value of x | trunc(7.3) -> 7 |
upper(s) | Upper case the string s | upper("Hi there") -> HI THERE |
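A sketch combining several of the functions above in one SELECT (the queue and column names are illustrative):

```sql
select coalesce(value, 0) as value,
       substring(upper(id), 0, 3) as prefix,
       dateadd("month", 3, asof_date) as next_qtr
from input_table into output_table;
```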
Job Options
An example joboptions.json file for the boolopt(), charopt(), dateopt(), and numopt() functions is:
sql select date, dateadd("quarter",numopt('IFRS9.models.multiplier'),dateopt('IFRS9.models.runDate')) as qtr1 from functions2 into test2;
name=optiontest
{
"processoptions": {
"IFRS9": {
"models": {
"championModel": "Current_champ",
"multiplier": 10,
"runDate": "04/10/1967",
"useChampion": true
}
}
}
}