pyspark sql unable to recognize SQL query command
#6
micseydel,

Thanks for sticking with me on this issue.

I have simplified the code as much as possible. It now looks like this:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.csv('/home/packt/Downloads/Spark_DataFrames/HumanResources_Employee.csv',
                    inferSchema=True, header=True)
df.createOrReplaceTempView('HumanResources_Employee')

# This is the statement that raises the error below
myresults = spark.sql("SELECT TOP 20 * FROM HumanResources_Employee")
myresults.show()
I would have thought the TOP keyword would work with spark.sql().

However, I'm getting the error:

Py4JJavaError: An error occurred while calling o19.sql.
: org.apache.spark.sql.catalyst.parser.ParseException: 
extraneous input '20' expecting {<EOF>, '(', ',', '.', '[', 'SELECT', 'FROM', 'ADD', 'AS', 'ALL', 'DISTINCT', 'WHERE', 'GROUP', 'BY', 'GROUPING', 'SETS', 'CUBE', 'ROLLUP', 'ORDER', 'HAVING', 'LIMIT', 'AT', 'OR', 'AND', 'IN', NOT, 'NO', 'EXISTS', 'BETWEEN', 'LIKE', RLIKE, 'IS', 'NULL', 'TRUE', 'FALSE', 'NULLS', 'ASC', 'DESC', 'FOR', 'INTERVAL', 'CASE', 'WHEN', 'THEN', 'ELSE', 'END', 'JOIN', 'CROSS', 'OUTER', 'INNER', 'LEFT', 'SEMI', 'RIGHT', 'FULL', 'NATURAL', 'ON', 'LATERAL', 'WINDOW', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'UNBOUNDED', 'PRECEDING', 'FOLLOWING', 'CURRENT', 'FIRST', 'LAST', 'ROW', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'VIEW', 'REPLACE', 'INSERT', 'DELETE', 'INTO', 'DESCRIBE', 'EXPLAIN', 'FORMAT', 'LOGICAL', 'CODEGEN', 'CAST', 'SHOW', 'TABLES', 'COLUMNS', 'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'DROP', 'UNION', 'EXCEPT', 'MINUS', 'INTERSECT', 'TO', 'TABLESAMPLE', 'STRATIFY', 'ALTER', 'RENAME', 'ARRAY', 'MAP', 'STRUCT', 'COMMENT', 'SET', 'RESET', 'DATA', 'START', 'TRANSACTION', 'COMMIT', 'ROLLBACK', 'MACRO', 'IF', EQ, '<=>', '<>', '!=', '<', LTE, '>', GTE, '+', '-', '*', '/', '%', 'DIV', '&', '|', '^', 'PERCENT', 'BUCKET', 'OUT', 'OF', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'OVERWRITE', 'TRANSFORM', 'REDUCE', 'USING', 'SERDE', 'SERDEPROPERTIES', 'RECORDREADER', 'RECORDWRITER', 'DELIMITED', 'FIELDS', 'TERMINATED', 'COLLECTION', 'ITEMS', 'KEYS', 'ESCAPED', 'LINES', 'SEPARATED', 'FUNCTION', 'EXTENDED', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 'LAZY', 'FORMATTED', 'GLOBAL', TEMPORARY, 'OPTIONS', 'UNSET', 'TBLPROPERTIES', 'DBPROPERTIES', 'BUCKETS', 'SKEWED', 'STORED', 'DIRECTORIES', 'LOCATION', 'EXCHANGE', 'ARCHIVE', 'UNARCHIVE', 'FILEFORMAT', 'TOUCH', 'COMPACT', 'CONCATENATE', 'CHANGE', 'CASCADE', 'RESTRICT', 'CLUSTERED', 'SORTED', 'PURGE', 'INPUTFORMAT', 'OUTPUTFORMAT', DATABASE, DATABASES, 'DFS', 'TRUNCATE', 'ANALYZE', 'COMPUTE', 'LIST', 'STATISTICS', 'PARTITIONED', 'EXTERNAL', 'DEFINED', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'REPAIR', 'RECOVER', 'EXPORT', 'IMPORT', 'LOAD', 'ROLE', 'ROLES', 'COMPACTIONS', 'PRINCIPALS', 'TRANSACTIONS', 'INDEX', 'INDEXES', 'LOCKS', 'OPTION', 'ANTI', 'LOCAL', 'INPATH', 'CURRENT_DATE', 'CURRENT_TIMESTAMP', IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 11)

== SQL ==
SELECT TOP 20 * FROM HumanResources_Employee
-----------^^^
Any thoughts are most welcome.
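
For reference, Spark's SQL dialect has no T-SQL-style TOP keyword; LIMIT appears in the parser's expected-token list above. A minimal sketch of the equivalent using the DataFrame API, assuming the df defined above:

# Take the first 20 rows without SQL, using the DataFrame API
df.limit(20).show()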

micseydel,

I re-read your post and I think I understand what you're saying. I changed the code to:

myresults = spark.sql("SELECT * FROM HumanResources_Employee LIMIT 20")

And it worked. Can you show me how to return 20 percent of the rows, instead of just 20 rows?
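
A minimal sketch of one way to do that, assuming the same DataFrame and temp view as above (TABLESAMPLE and PERCENT both appear in the parser's expected-token list, so the clause should be available in this Spark version); note that both approaches sample approximately, not exactly, 20 percent:

# Approximate 20% sample via Spark SQL's TABLESAMPLE clause
sampled = spark.sql("SELECT * FROM HumanResources_Employee TABLESAMPLE (20 PERCENT)")
sampled.show()

# Or with the DataFrame API; a fixed seed makes the sample repeatable
df.sample(False, 0.2, seed=42).show()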