Add JupyterLab support #41
base: master
Conversation
Codecov Report
    @@           Coverage Diff           @@
    ##           master      #41   +/-   ##
    =======================================
      Coverage   96.61%   96.61%
    =======================================
      Files           3        3
      Lines          59       59
      Branches        5        5
    =======================================
      Hits           57       57
      Misses          2        2

Continue to review the full report at Codecov.
Minor fixes needed, LGTM otherwise. Thank you @mdboom!
README.md (outdated):

    ## Server

    The server that communicates between the Jupyter server and Spark is the same
    regardless of the frontend used. It wueries the Spark UI service on the backend
typo: "wueries" should be "queries"
"@phosphor/disposable": "^1.1.2", | ||
"jquery": "^3.3.1", | ||
"bootstrap": "^4.1.1" | ||
}, |
Nice!
    id: 'jupyter_spark',
    autoStart: true,
    activate: (app) => {
        let api_url = "/spark/api/v1";
In the past we had trouble with the base path for the API, are you sure that's not the case here?
Thanks for the reminder. Indeed it looks like we have the same problem here. I'll fix.
Should be fixed now.
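For reference, a minimal sketch of one way to resolve the API path against the server's configured base URL, using `PageConfig` and `URLExt` from `@jupyterlab/coreutils` (this assumes that package is available to the extension, and is not necessarily the exact fix applied in this PR):

```js
// Sketch: resolve the Spark API path against the configured base URL instead
// of hard-coding "/spark/api/v1". PageConfig.getBaseUrl() returns the base
// URL the notebook server was started with (e.g. behind JupyterHub), and
// URLExt.join normalizes the slashes between segments.
const { PageConfig, URLExt } = require('@jupyterlab/coreutils');

const api_url = URLExt.join(PageConfig.getBaseUrl(), 'spark/api/v1');
```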
We'll want to deploy this to npm somehow, ideally using a Travis deploy clause like we already do for PyPI.
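For context, a Travis npm deploy clause looks roughly like the sketch below; the field values are placeholders, not this repository's actual configuration:

```yaml
# .travis.yml (sketch): an npm deploy clause mirroring the PyPI pattern.
deploy:
  provider: npm
  email: owner@example.com          # placeholder npm account email
  api_key:
    secure: "<encrypted npm token>" # encrypted via `travis encrypt`
  on:
    tags: true                      # publish only on tagged releases
```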
I came here looking for this exact feature - glad to see it already has a PR!

Current look: (screenshot)

What I'm suggesting: (screenshot of a reworked table layout)
Is there also a progress bar at the cell that is running?
That's a good suggestion about the layout of the table. It shouldn't be too difficult to make that change. There isn't a progress bar at the cell that is running (unlike in the "classic notebook" version of this plugin), because, if I understand correctly, the API for that isn't released yet (http://jupyterlab.readthedocs.io/en/stable/developer/notebook.html#the-ipywidgets-third-party-extension). But I honestly didn't do too much digging about it, so if there's an obvious way forward that isn't likely to change too much down the road, I'm game.
Would be great to get this merged, any blockers?
@elehcimd The problem is mostly making the table changes @mdboom mentioned above and figuring out how to deploy this to npm (and how that integrates with the production deployment of Jupyter). @mdboom Would you mind elaborating on how we'd apply this to our Spark instance? Would we have to install Node and run the full npm build to get it to run in our lab environment?
@jezdez: Yes, JupyterLab extensions have to go through a build process that involves a local copy of Node. Details here.
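In practice that build happens as part of the standard extension install command, roughly as below (the package name is an assumption based on the plugin id in this PR):

```sh
# Installing a source extension triggers a local webpack build,
# which requires Node.js to be available on the machine:
jupyter labextension install jupyter_spark
```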
Gentle bump. Would love to see some version of this integrated.
@jezdez @mdboom Thanks for this extension. However, if we actually start considering this solution at an enterprise level, where we have millions of applications running in multiple queues, even a first pass at updating all applications takes a long time and incurs a lot of latency. Rather than creating a list of all application ids from the cluster, is there a workaround in terms of the cache the plugin maintains? Could we take the application ids of only the .ipynb files that are open in the lab, and maintain a cache for just those, rather than caching the entire array of ids? Do let me know if we can do this; I'm happy to contribute, given some guidance.
I made a PR here that updates this PR to work with modern Jupyter (especially the latest Tornado).
Fix #39.
This adds a Spark status window to the side pane in Jupyter Lab. I played with making a modal dialog like the old extension, but it feels like the side pane is more in keeping with the "JupyterLab way".
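For readers unfamiliar with the pattern, a side-pane widget in a JupyterLab extension of this era is registered roughly as in the sketch below (a hedged illustration, not the PR's actual code; `Widget` comes from `@phosphor/widgets`, which the PR already lists as a dependency):

```js
// Sketch: create a widget and attach it to the left sidebar. The id and
// label here are hypothetical, for illustration only.
const { Widget } = require('@phosphor/widgets');

const panel = new Widget();
panel.id = 'jupyter-spark-monitor';  // hypothetical panel id
panel.title.label = 'Spark';         // tab label shown in the sidebar

// Inside the extension's activate(app) callback:
app.shell.addToLeftArea(panel);
```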
It looks like: (screenshot)
@teonbrooks, @jezdez