Firebase Data Connect offers multiple ways to interact with your Cloud SQL database:

- Native GraphQL: Define types in your `schema.gql`, and Data Connect translates your GraphQL operations into SQL. This is the standard approach, offering strong typing and schema-enforced structures. Most of the Data Connect documentation outside of this page discusses this option. When possible, use this method to take advantage of full type safety and tooling support.
- The `@view` directive: Define a GraphQL type in `schema.gql` backed by a custom `SELECT` SQL statement. This is useful for creating read-only, strongly-typed views based on complex SQL logic. These types are queryable like regular types. See `@view`.
- Native SQL: Embed SQL statements directly in named operations in `.gql` files using special root fields. This provides maximum flexibility and direct control, especially for operations that aren't easily expressed in standard GraphQL, that leverage database-specific features, or that use PostgreSQL extensions.

This guide focuses on the Native SQL option.
Common use cases for native SQL
While native GraphQL provides full type safety, and the `@view` directive offers
strongly-typed results for read-only SQL reports, native SQL provides the
flexibility needed for:
- PostgreSQL Extensions: Directly query and use any installed PostgreSQL extension (such as `PostGIS` for geospatial data) without needing to map complex types in your GraphQL schema.
- Complex Queries: Execute intricate SQL with joins, subqueries, aggregations, window functions, and stored procedures.
- Data Manipulation (DML): Perform `INSERT`, `UPDATE`, and `DELETE` operations directly. (However, don't use native SQL for Data Definition Language (DDL) commands. You must continue to make schema-level alterations using GraphQL to keep your backend and generated SDKs in sync.)
- Database-Specific Features: Utilize functions, operators, or data types unique to PostgreSQL.
- Performance Optimization: Hand-tune SQL statements for critical paths.
Native SQL root fields
To write operations with SQL, use one of the following root fields of the
`query` or `mutation` types:
query fields
| Field | Description |
|---|---|
| `_select` | Executes a SQL query returning zero or more rows. Arguments: `sql` (the SQL statement) and `params` (an array of positional parameter values). Returns: a JSON array containing one object per row. |
| `_selectFirst` | Executes a SQL query expected to return zero or one row. Arguments: `sql` and `params`, as above. Returns: a JSON object for the matching row, or `null` if no row matches. |
mutation fields
| Field | Description |
|---|---|
| `_execute` | Executes a DML statement (`INSERT`, `UPDATE`, or `DELETE`). Arguments: `sql` and `params`. Returns: the number of affected rows. |
| `_executeReturning` | Executes a DML statement with a `RETURNING` clause. Arguments: `sql` and `params`. Returns: a JSON array containing one object per returned row. |
| `_executeReturningFirst` | Executes a DML statement with a `RETURNING` clause, expected to return zero or one row. Arguments: `sql` and `params`. Returns: a JSON object for the returned row, or `null` if no row is returned. |
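The examples later in this guide cover `_execute` and `_executeReturningFirst` but not `_executeReturning`. As a hedged sketch against the same `movie` table used in those examples, a mutation could return every affected row as a JSON array:

```graphql
mutation ArchiveLowRatedMovies($threshold: Float!) @auth(level: NO_ACCESS) {
  deleted: _executeReturning(
    sql: """
    DELETE FROM movie
    WHERE rating < $1
    RETURNING id, title, rating
    """,
    params: [$threshold]
  )
}
```

After running the mutation, each deleted row appears as an object in `data.deleted`.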
Notes:

- Operations are executed using the permissions granted to the Data Connect service account.
- If you explicitly set the table name using the `@table` directive (`@table(name: "ExampleTable")`), you must also enclose the table name in quotes in your SQL statements (`SELECT field FROM "ExampleTable" ...`). Without the quotation marks, Data Connect converts the table name to snake case (`example_table`).
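To illustrate the two naming forms (the table and column names below are placeholders, not part of any particular schema):

```sql
/* Type declared with @table(name: "ExampleTable"): quote the name. */
SELECT field FROM "ExampleTable";

/* Type declared without an explicit name:
   Data Connect expects the snake-case form. */
SELECT field FROM example_table;
```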
Syntax rules & limitations
Native SQL enforces strict parsing rules to ensure security and prevent SQL injection. Be aware of the following constraints:

- Comments: Use block comments (`/* ... */`). Line comments (`--`) are forbidden because they can truncate subsequent clauses (like security filters) during query concatenation.
- Parameters: Use positional parameters (`$1`, `$2`) that match the `params` array order. Named parameters (`$id`, `:name`) are not supported.
- Strings: Extended string literals (`E'...'`) and dollar-quoted strings (`$$...$$`) are supported. PostgreSQL Unicode escapes (`U&'...'`) are not supported.
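Pulling these rules together, a sketch of a conforming operation (the field names reuse the `movie` table from the examples below):

```graphql
query SearchMovies($genre: String!) @auth(level: PUBLIC) {
  results: _select(
    sql: """
    /* Block comment: a line comment (--) would be rejected here. */
    SELECT id, title
    FROM movie
    WHERE genre = $1              /* positional, bound from the params array */
      AND title <> $$Untitled$$   /* dollar-quoted string literal */
    """,
    params: [$genre]
  )
}
```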
Parameters in comments
The parser ignores everything inside a block comment. If you comment out a line
containing a parameter (for example, /* WHERE id = $1 */), you must also
remove that parameter from the params list, or the operation will fail with
the error unused parameter: $1.
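For instance, this hypothetical query fails validation because the only use of `$1` sits inside the block comment:

```graphql
query GetMovie($id: UUID!) @auth(level: PUBLIC) {
  _selectFirst(
    sql: "SELECT id, title FROM movie /* WHERE id = $1 */",
    params: [$id]  # fails with: unused parameter: $1
  )
}
```

Either restore the `WHERE` clause or drop `$id` from `params` (and from the variable definitions) to resolve the error.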
Examples
Example 1: Basic SELECT with field aliasing
You can alias the root field (for example, movies: _select) to make the client
response cleaner (data.movies instead of data._select).
queries.gql:
```graphql
query GetMoviesByGenre($genre: String!, $limit: Int!) @auth(level: PUBLIC) {
  movies: _select(
    sql: """
    SELECT id, title, release_year, rating
    FROM movie
    WHERE genre = $1
    ORDER BY release_year DESC
    LIMIT $2
    """,
    params: [$genre, $limit]
  )
}
```
After running the query using a client SDK, the result will be in data.movies.
Example 2: Basic UPDATE
mutations.gql:
```graphql
mutation UpdateMovieRating($movieId: UUID!, $newRating: Float!) @auth(level: NO_ACCESS) {
  _execute(
    sql: """
    UPDATE movie
    SET rating = $2
    WHERE id = $1
    """,
    params: [$movieId, $newRating]
  )
}
```
After running the mutation using a client SDK, the number of affected rows will
be in data._execute.
Example 3: Basic aggregation
queries.gql:
```graphql
query GetTotalReviewCount @auth(level: PUBLIC) {
  stats: _selectFirst(
    sql: "SELECT COUNT(*) as total_reviews FROM \"Reviews\""
  )
}
```
After running the query using a client SDK, the result will be in
data.stats.total_reviews.
Example 4: Advanced aggregation with RANK
queries.gql:
```graphql
query GetMoviesRankedByRating @auth(level: PUBLIC) {
  _select(
    sql: """
    SELECT
      id,
      title,
      rating,
      RANK() OVER (ORDER BY rating DESC) as rank
    FROM movie
    WHERE rating IS NOT NULL
    LIMIT 20
    """,
    params: []
  )
}
```
After running the query using a client SDK, the result will be in
data._select.
Example 5: UPDATE with RETURNING and Auth Context
mutations.gql:
```graphql
mutation UpdateMyReviewText($movieId: UUID!, $newText: String!) @auth(level: USER) {
  updatedReview: _executeReturningFirst(
    sql: """
    UPDATE "Reviews"
    SET review_text = $2
    WHERE movie_id = $1 AND user_id = $3
    RETURNING movie_id, user_id, rating, review_text
    """,
    params: [$movieId, $newText, { _expr: "auth.uid" }]
  )
}
```
After running the mutation using a client SDK, the updated review data will be in
data.updatedReview.
Example 6: Advanced CTE with upserts (atomic get-or-create)
This pattern is useful for ensuring dependent records (like Users or Movies) exist before inserting a child record (like a Review), all in a single database transaction.
mutations.gql:
```graphql
mutation CreateMovieCTE($movieId: UUID!, $userId: UUID!, $reviewId: UUID!) {
  _execute(
    sql: """
    WITH
      new_user AS (
        INSERT INTO "user" (id, username)
        VALUES ($2, 'Auto-Generated User')
        ON CONFLICT (id) DO NOTHING
        RETURNING id
      ),
      movie AS (
        INSERT INTO movie (id, title, image_url, release_year, genre)
        VALUES ($1, 'Auto-Generated Movie', 'https://placeholder.com', 2025, 'Sci-Fi')
        ON CONFLICT (id) DO NOTHING
        RETURNING id
      )
    INSERT INTO "Reviews" (id, movie_id, user_id, rating, review_text, review_date)
    VALUES ($3, $1, $2, 5, 'Good!', NOW())
    """,
    params: [$movieId, $userId, $reviewId]
  )
}
```
Example 7: Using Postgres extensions
Native SQL allows you to use Postgres extensions, such as PostGIS, without needing to map complex geometry types into your GraphQL schema or alter your underlying tables.
In this example, suppose your restaurant app has a table that stores location
data in a metadata JSON column (for example, {"latitude": 37.3688, "longitude": -122.0363}).
If you have enabled the PostGIS extension,
you can use standard Postgres JSON operators (`->>`) to extract these values on
the fly and pass them into the PostGIS `ST_MakePoint` function.

queries.gql:
```graphql
query GetNearbyActiveRestaurants($userLong: Float!, $userLat: Float!, $maxDistanceMeters: Float!) @auth(level: USER) {
  nearby: _select(
    sql: """
    SELECT
      id,
      name,
      tags,
      ST_Distance(
        ST_MakePoint((metadata->>'longitude')::float, (metadata->>'latitude')::float)::geography,
        ST_MakePoint($1, $2)::geography
      ) as distance_meters
    FROM restaurant
    WHERE active = true
      AND metadata ? 'longitude' AND metadata ? 'latitude'
      AND ST_DWithin(
        ST_MakePoint((metadata->>'longitude')::float, (metadata->>'latitude')::float)::geography,
        ST_MakePoint($1, $2)::geography,
        $3
      )
    ORDER BY distance_meters ASC
    LIMIT 10
    """,
    params: [$userLong, $userLat, $maxDistanceMeters]
  )
}
```
After running the query using a client SDK, the result will be in data.nearby.
Security best practices: dynamic SQL & stored procedures
Data Connect securely parameterizes all inputs at the GraphQL-to-database boundary, fully protecting your standard SQL queries from first-order SQL injection. However, if you use SQL to call custom Postgres stored procedures or functions that execute dynamic SQL, you must ensure your internal PL/pgSQL code handles these parameters securely.
If your stored procedure directly concatenates user inputs into an EXECUTE
string, it bypasses parameterization and creates a second-order SQL injection
vulnerability:
```sql
-- INSECURE: Do not concatenate parameters into dynamic strings!
CREATE OR REPLACE PROCEDURE unsafe_update(user_input TEXT)
LANGUAGE plpgsql AS $$
BEGIN
  -- A malicious user_input (e.g., "val'; DROP TABLE users; --") will execute as code.
  EXECUTE 'UPDATE target_table SET status = ''' || user_input || '''';
END;
$$;
```
To avoid this, follow these best practices:

- Use the `USING` clause: When writing dynamic SQL in your stored procedures, always use the `USING` clause to bind data parameters safely.
- Use `format()` for identifiers: Use `format()` with the `%I` flag for safe database identifier injection (like table names).
- Strictly allow identifiers: Don't let client applications arbitrarily choose database identifiers. If your procedure requires dynamic identifiers, validate the input against a hardcoded allowlist inside your PL/pgSQL logic before execution.
```sql
-- SECURE: Use format() for identifiers and USING for data values
CREATE OR REPLACE PROCEDURE secure_update(target_table TEXT, new_value TEXT, row_id INT)
LANGUAGE plpgsql AS $$
BEGIN
  -- Validate the dynamic table name against an allowlist
  IF target_table NOT IN ('orders', 'users', 'inventory') THEN
    RAISE EXCEPTION 'Invalid table name';
  END IF;
  -- Execute securely
  EXECUTE format('UPDATE %I SET status = $1 WHERE id = $2', target_table)
  USING new_value, row_id;
END;
$$;
```