mirror of
https://github.com/stellarshenson/stellars-jupyterhub-ds.git
synced 2026-03-07 21:50:28 +00:00
docs: add comprehensive documentation and update screenshots
Added three new documentation files following modus primaris style:
- doc/ui-template-customization.md: Guide for extending JupyterHub UI templates with RequireJS, Bootstrap 5, CSRF protection, and custom handlers
- doc/docker-socket-permissions.md: Docker socket access control documentation covering group-based permissions, security implications, and best practices
- doc/notifications.md: Complete notification broadcast system documentation including implementation details, API integration, error handling, and troubleshooting

Updated UI screenshots in README.md:
- Replaced screenshot-restart-server.png with screenshot-home.png showing the complete user control panel (restart server + volume management)
- Added screenshot-send-notification.png showing the admin notification broadcast interface with message composer, type selector, and delivery results

All documentation follows a consistent structure: brief overview, key facts in bullet points, explanatory narrative, and technical specifications without excessive nesting or marketing language.
BIN  .resources/screenshot-home.png (Executable file; binary not shown; After: 43 KiB, Before: 29 KiB)
BIN  .resources/screenshot-send-notification.png (Executable file; binary not shown; After: 66 KiB)
@@ -47,8 +47,11 @@ Users access JupyterHub through Traefik reverse proxy with TLS termination. Afte

## User Interface

(screenshot) *Restart running JupyterLab container directly from the user control panel*

(screenshot) *User control panel with server restart and volume management options*

(screenshot) *Admin panel for broadcasting notifications to all active JupyterLab servers*

(screenshot) *Access volume management when server is stopped*
185  doc/docker-socket-permissions.md (Normal file)
@@ -0,0 +1,185 @@
# Docker Socket Permissions and Access Control

The JupyterHub platform requires Docker socket access for spawning and managing user containers. Additionally, trusted users can be granted Docker socket access within their JupyterLab environments for container orchestration tasks.

**Security Context:**
- Docker socket (`/var/run/docker.sock`) provides root-equivalent access to the host system
- Any process with socket access can create privileged containers, mount host directories, and execute commands as root
- The JupyterHub container requires socket access for DockerSpawner functionality
- User containers optionally receive socket access based on group membership
- Socket access should only be granted to fully trusted administrators and users
## JupyterHub Container Socket Access

The JupyterHub container requires read-write access to the Docker socket for core functionality. The `compose.yml` file mounts the socket directly into the container:

```yaml
services:
  jupyterhub:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:rw
```

This mount enables DockerSpawner to create user containers, manage volumes, and control container lifecycle. The JupyterHub process runs as root inside its container and has unrestricted Docker daemon access. This design is intentional - DockerSpawner needs this level of access to dynamically spawn isolated environments per user.

**What JupyterHub Does with Socket Access:**
- Creates user JupyterLab containers with `docker run` equivalent commands
- Manages user-specific Docker volumes (home, workspace, cache)
- Removes containers when users stop their servers
- Restarts user containers for server restart functionality
- Deletes volumes when users reset their storage through the volume management feature
## User Container Socket Access

Users can optionally receive Docker socket access within their spawned JupyterLab containers. This enables advanced workflows like building Docker images, running additional containers for development, or deploying services from within JupyterLab.

Access is controlled through JupyterHub's built-in group system. Only users added to the `docker-privileged` group receive socket access. The group is created automatically on platform startup and protected from deletion.

**Implementation via Pre-Spawn Hook:**

The platform uses a pre-spawn hook in `jupyterhub_config.py` to conditionally mount the Docker socket:

```python
async def pre_spawn_hook(spawner):
    user = spawner.user

    # Check if user is in docker-privileged group
    if any(group.name == 'docker-privileged' for group in user.groups):
        spawner.extra_host_config = {
            'binds': {
                '/var/run/docker.sock': {
                    'bind': '/var/run/docker.sock',
                    'mode': 'rw'
                }
            }
        }
```

This hook executes before every container spawn. Users outside the group spawn without socket access. Group membership changes require stopping and restarting the user's server to take effect.
## Managing Privileged Users

Administrators control Docker socket access through the JupyterHub admin panel at `/hub/admin`. The interface provides group management under the "Groups" section.

**Granting Socket Access:**
1. Admin logs into JupyterHub
2. Navigate to the Admin Panel
3. Click "Groups" in the top menu
4. Select the `docker-privileged` group
5. Add usernames to the group member list
6. Save changes

Users must restart their JupyterLab server for group membership changes to apply. The next spawn will include or exclude the Docker socket based on current group membership.

**Revoking Socket Access:**
1. Remove the user from the `docker-privileged` group in the admin panel
2. Instruct the user to stop their server
3. The user starts their server again without socket access
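Membership can also be changed programmatically through JupyterHub's REST API (`POST /hub/api/groups/<name>/users` adds members). A minimal sketch using only the standard library; the hub URL and the `JUPYTERHUB_API_TOKEN` environment variable are assumptions for illustration:

```python
import json
import os
import urllib.request

HUB_API = "http://localhost:8000/hub/api"  # assumption: hub reachable at the default URL

def group_users_request(group, usernames, token):
    """Build URL, headers, and JSON body for adding users to a hub group."""
    url = f"{HUB_API}/groups/{group}/users"
    headers = {
        "Authorization": f"token {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"users": usernames}).encode()
    return url, headers, body

def grant_docker_access(usernames):
    """Add the given users to the docker-privileged group via the REST API."""
    url, headers, body = group_users_request(
        "docker-privileged", usernames, os.environ["JUPYTERHUB_API_TOKEN"])
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

As with the admin panel, changes made this way take effect on the user's next server start.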
## Built-in Group Protection

The `docker-privileged` group is a built-in protected group that cannot be permanently deleted. The platform defines built-in groups in `jupyterhub_config.py`:

```python
BUILTIN_GROUPS = ['docker-privileged']
```

A startup script reads this list and creates missing groups before JupyterHub launches. Additionally, the pre-spawn hook recreates the group if deleted during runtime. This ensures the security model cannot be accidentally bypassed by deleting the group.

**Adding New Built-in Groups:**

To add another protected group (e.g., for different permission levels), edit only the `BUILTIN_GROUPS` list in `jupyterhub_config.py`:

```python
BUILTIN_GROUPS = ['docker-privileged', 'gpu-access', 'shared-admin']
```

The startup script automatically handles creation without requiring changes to the script itself. This follows the DRY (Don't Repeat Yourself) principle with a single source of truth.
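The startup-time creation step can also be expressed through JupyterHub's standard `c.JupyterHub.load_groups` setting, which creates any listed group that is missing from the database at hub start. A minimal sketch, not necessarily the platform's actual startup script (and note that `load_groups` alone does not prevent later deletion - that is what the runtime recreation in the pre-spawn hook covers):

```python
BUILTIN_GROUPS = ['docker-privileged']

def builtin_group_config(groups):
    """Map each built-in group name to an (initially empty) member list."""
    return {name: [] for name in groups}

# In jupyterhub_config.py, JupyterHub creates any group listed in
# load_groups that does not already exist in its database:
# c.JupyterHub.load_groups = builtin_group_config(BUILTIN_GROUPS)
```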
## Security Implications

Granting Docker socket access to user containers introduces significant security considerations. Users with socket access can effectively become root on the host system through various attack vectors.

**What Users Can Do with Socket Access:**

- Create privileged containers with the `--privileged` flag
- Mount any host directory into containers, including `/`, `/etc`, `/root`
- Run containers as root with full capabilities
- Access other users' container filesystems and volumes
- Read sensitive files from the host (SSH keys, password files, application secrets)
- Install kernel modules or modify system configuration
- Escape container isolation entirely

This is not a vulnerability - it is the expected behavior of Docker socket access. The socket provides the same level of access as running commands as root directly on the host. Therefore, only grant access to users who already have legitimate root/sudo access or an equivalent trust level.

## Use Cases for Privileged Access

Despite the security implications, Docker socket access enables powerful legitimate workflows for advanced users and administrators.

**Valid Use Cases:**
- Building custom Docker images within JupyterLab for experimentation
- Running development databases or services alongside notebooks
- Creating isolated test environments for CI/CD prototyping
- Container orchestration development and testing
- Infrastructure as Code development with Docker-based tools
- Teaching and demonstrating Docker concepts in educational settings

The platform is designed for environments where users already have high trust levels, such as internal data science teams, research labs, or development environments where users would have SSH access anyway.
## Auditing Socket Usage

The Docker daemon logs all commands received through the socket. These logs appear in the host system logs and can be monitored for suspicious activity.

**Viewing Docker Daemon Logs:**
```bash
# On Ubuntu/Debian with systemd
sudo journalctl -u docker.service -f

# Or check Docker events
docker events
```

Organizations requiring strict audit trails should configure Docker daemon logging to forward to a centralized logging system. This enables security teams to detect and respond to misuse of Docker socket access.
## Alternative: Docker-in-Docker (DinD)

An alternative to mounting the host Docker socket is running a Docker daemon inside user containers (Docker-in-Docker). This provides container isolation between user Docker operations and the host Docker daemon.

**DinD Tradeoffs:**
- Better isolation - users cannot access host containers or volumes
- Worse performance - additional daemon overhead per user
- Complex networking - the inner daemon requires privileged mode anyway
- Storage overhead - each user needs a separate image cache
- Still requires privileged mode - the security benefit is limited

This platform uses direct socket mounting for simplicity and performance. The group-based access control provides sufficient security for the target use case of trusted data science teams.
## Disabling Privileged Access

To completely disable user Docker socket access, remove the pre-spawn hook logic from `jupyterhub_config.py`:

```python
# Comment out or remove the pre_spawn_hook function entirely
# async def pre_spawn_hook(spawner):
#     ...
```

This prevents all users from receiving socket access regardless of group membership. The JupyterHub container itself still requires socket access for DockerSpawner functionality - this cannot be disabled without fundamentally changing the spawner type.
## Recommendations

**For Production Environments:**
- Only grant `docker-privileged` membership to administrators and DevOps personnel
- Document which users have socket access and why
- Regularly audit group membership through the admin panel
- Enable Docker daemon logging and monitor for suspicious activity
- Consider a separate JupyterHub instance for privileged users if many non-privileged users exist
- Ensure the host system is hardened with the latest security updates

**For Development/Research Environments:**
- Docker socket access can be granted more liberally to research teams
- Focus on data loss prevention rather than privilege escalation prevention
- Regular backups are more critical than access restrictions
- Clear usage policies help prevent accidental damage more than technical controls

Docker socket access is a powerful feature that enables advanced workflows while introducing security considerations. Understanding the implications helps administrators make informed decisions about access control for their specific environment.
249  doc/notifications.md (Normal file)
@@ -0,0 +1,249 @@
# Admin Notification Broadcast System

The platform provides a notification broadcast system allowing administrators to send messages to all active JupyterLab servers simultaneously. This feature is useful for maintenance announcements, system updates, or urgent communications to all users.

**Access Requirements:**
- Admin-only feature accessible at `/hub/notifications`
- Target JupyterLab servers must have `jupyterlab_notifications_extension` installed
- Servers must be running (active spawners only)
## User Interface

The notification panel presents a simple form for composing and broadcasting messages to all active users.

**Form Fields:**
- Message textarea with a 140-character limit and live counter (Twitter-style brevity)
- Notification type dropdown: default, info, success, warning, error, in-progress
- Auto-close checkbox: when unchecked, notifications persist until dismissed manually
- Send button: broadcasts to all active servers concurrently

The interface displays broadcast results immediately after sending, showing success and failure counts with an expandable per-user details table listing username, delivery status, and error messages for failed deliveries.
## Notification Types and Visual Styling

The extension supports six notification types with distinct visual appearances to convey message urgency and context.

**Available Types:**
- **default** - Neutral gray styling for general announcements
- **info** - Blue styling for informational messages (system updates, tips)
- **success** - Green styling for positive confirmations (maintenance complete)
- **warning** - Yellow/amber styling for important notices (scheduled downtime)
- **error** - Red styling for critical alerts (service disruptions)
- **in-progress** - Animated styling for ongoing operations (deployment in progress)

Users see notifications as toast popups in their JupyterLab interface with the appropriate color scheme. Each notification includes a "Dismiss" button for manual closure regardless of the auto-close setting.
## Technical Implementation

The broadcast system uses asynchronous concurrent delivery to all active servers. When an admin submits a notification, the backend queries all running JupyterLab spawners from the JupyterHub database and sends HTTP POST requests to each server's notification endpoint.

**Delivery Process:**
1. Admin submits the notification form at `/hub/notifications`
2. Backend validates message length and notification type
3. System queries the database for all active user spawners
4. For each active server, generates a temporary API token (5-minute expiry)
5. Constructs the endpoint URL using the spawner's `server.base_url` property
6. Sends concurrent POST requests to all servers using `asyncio.gather()`
7. Each request has a 5-second timeout for connection and response
8. Aggregates results and returns success/failure counts to the admin

Concurrent delivery ensures the broadcast completes quickly even with many active users. A 100-user broadcast completes in approximately 5 seconds rather than 500 seconds with sequential delivery.
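The concurrency pattern in steps 6-8 can be sketched as follows. This is a simplified model, not the platform's actual handler: `send` stands in for whatever coroutine performs the HTTP POST to one user's server, and each delivery is capped by a per-request timeout so one unreachable server cannot stall the whole broadcast:

```python
import asyncio

async def deliver_one(send, user, timeout=5.0):
    """Deliver to a single server; never raises, returns (user, ok, error)."""
    try:
        await asyncio.wait_for(send(user), timeout=timeout)
        return (user, True, None)
    except Exception as exc:  # timeout, connection error, HTTP error, ...
        return (user, False, str(exc))

async def broadcast(send, users, timeout=5.0):
    """Deliver to all users concurrently and aggregate success/failure counts."""
    results = await asyncio.gather(
        *(deliver_one(send, user, timeout) for user in users))
    succeeded = sum(1 for _, ok, _ in results if ok)
    return succeeded, len(results) - succeeded, results
```

Because every delivery runs concurrently and each is bounded by the timeout, total wall time stays near the timeout regardless of user count, matching the scaling behavior described under Performance Characteristics.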
## API Endpoint Integration

Notifications are delivered to the JupyterLab extension's ingest endpoint at `/jupyterlab-notifications-extension/ingest`. The platform constructs the full URL dynamically based on each user's server configuration.

**Endpoint URL Construction:**
```python
container_url = f"http://jupyterlab-{username}:8888"
base_url = spawner.server.base_url  # e.g., /jupyterhub/user/konrad/
endpoint = f"{container_url}{base_url}jupyterlab-notifications-extension/ingest"
```

This approach handles different JupyterHub configurations automatically. The base URL varies depending on Traefik routing, reverse proxy setup, and JupyterHub URL prefix settings. Using the spawner's `server.base_url` property ensures correct routing in all configurations.
**Payload Format:**
```json
{
  "message": "Maintenance scheduled for 10 PM tonight",
  "type": "warning",
  "autoClose": false,
  "actions": [
    {
      "label": "Dismiss",
      "caption": "Close this notification",
      "displayType": "default"
    }
  ]
}
```

The payload includes the message text, notification type, auto-close behavior, and an array of action buttons. The "Dismiss" button is included in all notifications for manual closure.
## Authentication and Security

Each notification request requires authentication via a temporary API token. The platform generates a new token for each broadcast event with a 5-minute expiry time.

**Token Generation:**
```python
token = user.new_api_token(note="notification-broadcast", expires_in=300)
```

This approach provides several security benefits:
- Tokens expire quickly, limiting the exposure window if intercepted
- Each broadcast gets unique tokens, preventing replay attacks
- Tokens are never logged or persisted beyond the broadcast operation
- Failed deliveries don't leave valid tokens in logs or error messages

The HTTP request includes the token in the Authorization header:
```
Authorization: Bearer <token>
```

The JupyterLab server validates the token against the user's stored tokens before processing the notification. Invalid or expired tokens result in 401 Unauthorized errors logged as authentication failures.
## Error Handling and Logging

The broadcast system implements comprehensive error handling for various failure scenarios. Each server delivery can succeed or fail independently without affecting other deliveries.

**Common Failure Scenarios:**
- **Connection timeout** - Server not responding, likely stopped or a network issue
- **404 Not Found** - Notification extension not installed on the JupyterLab server
- **401/403 Authentication** - Token validation failed, usually a configuration issue
- **500 Server error** - Extension error while processing the notification

Each delivery result is logged with a concise one-line entry showing username, message preview (first 50 characters), notification type, and outcome:

```
[I] [Notification] alice: 'Maintenance scheduled for 10 PM tonight' (warning) - SUCCESS
[W] [Notification] bob: 'System update in progress' (info) - FAILED: HTTP 404: Not Found
[E] [Notification] charlie: 'Critical security patch' (error) - ERROR: Server not responding
```

These logs appear in the JupyterHub container logs, accessible via `docker logs stellars-jupyterhub-ds-jupyterhub`. The log format enables quick scanning for delivery issues and troubleshooting failed broadcasts.
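A sketch of how such a one-line entry could be assembled. This is illustrative only; the helper name, the exact truncation behavior, and the 50-character preview length are assumptions inferred from the example log lines above:

```python
def format_delivery_log(username, message, ntype, outcome, preview_len=50):
    """Build a one-line delivery log entry with a truncated message preview."""
    if len(message) > preview_len:
        # Keep the preview within preview_len characters, ellipsis included
        preview = message[:preview_len - 3] + "..."
    else:
        preview = message
    return f"[Notification] {username}: '{preview}' ({ntype}) - {outcome}"
```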
## Extension Dependency

The notification system requires `jupyterlab_notifications_extension` installed on all spawned JupyterLab servers. The platform does not install this extension - it must be included in the `stellars/stellars-jupyterlab-ds` Docker image or installed via the user environment.

**Extension Repository:** https://github.com/stellarshenson/jupyterlab_notifications_extension

The extension provides a server endpoint for ingesting notifications and a frontend component for displaying them in JupyterLab. Notifications appear as toast popups in the top-right corner with persistence in a notification center panel.

**Verifying Extension Installation:**

Check if the extension is installed and enabled on a running JupyterLab server:
```bash
docker exec jupyterlab-konrad bash -c \
  "conda run -n base jupyter server extension list | grep notification"
```

Expected output:
```
jupyterlab_notifications_extension enabled
- Validating jupyterlab_notifications_extension...
  jupyterlab_notifications_extension 1.0.14 OK
```

Missing or disabled extensions result in 404 errors when attempting notification delivery. The broadcast results table indicates which users received notifications successfully and which failed due to missing extensions.
## Usage Recommendations

**Message Length:**

The 140-character limit encourages concise, actionable messages. Longer messages get truncated or may not display properly in the JupyterLab interface. Focus on essential information and include links for details if needed.

**Good:** "Planned maintenance tonight 10 PM - 2 AM. Save work and stop servers. Details: example.com/maint"

**Bad:** "We will be performing scheduled maintenance activities on the infrastructure tonight starting at approximately 10 PM and continuing until around 2 AM the next morning during which time you should make sure to save all your work and stop your servers to avoid any potential data loss or interruption to your research activities..."
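The length check described in step 2 of the delivery process amounts to a small guard like the following. The helper name and exact error messages are assumptions; only the 140-character limit comes from the document:

```python
MAX_MESSAGE_LEN = 140  # limit stated for the message textarea

def validate_message(message):
    """Reject empty or over-length broadcast messages; return the trimmed text."""
    msg = message.strip()
    if not msg:
        raise ValueError("message must not be empty")
    if len(msg) > MAX_MESSAGE_LEN:
        raise ValueError(f"message exceeds {MAX_MESSAGE_LEN} characters")
    return msg
```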
**Notification Type Selection:**

Choose notification types based on message urgency and required user action:
- **info** for routine announcements (new features, documentation updates)
- **warning** for actions users should take (save work before maintenance)
- **error** for critical issues requiring immediate attention (security patches)
- **success** for confirmations (maintenance complete, system restored)

**Auto-Close Behavior:**

Leave auto-close unchecked for important messages requiring acknowledgment. Users must manually dismiss the notification, ensuring they see the message. Enable auto-close for low-priority informational messages that don't require explicit acknowledgment.

**Timing Considerations:**

Send notifications when most users are actively working for maximum visibility. Notifications sent outside working hours may be dismissed automatically or missed entirely by users who don't check notification history.
## Troubleshooting

**Notification panel shows "No active servers found":**

No users currently have running JupyterLab servers. Users must have active sessions for broadcast delivery. Check the admin panel user list and verify which users have running servers before broadcasting.

**All deliveries fail with "Server not responding":**

JupyterLab containers may not be reachable from the JupyterHub container. Verify network connectivity:
```bash
docker exec stellars-jupyterhub-ds-jupyterhub ping -c 3 jupyterlab-konrad
```

Also check the JupyterHub network configuration and ensure spawned containers join `jupyterhub_network`.

**Deliveries fail with "Notification extension not installed":**

The JupyterLab image lacks `jupyterlab_notifications_extension`. Update the `stellars/stellars-jupyterlab-ds` image to include the extension or install it in user environments. Rebuild and restart affected JupyterLab servers.

**Some users receive notifications, others don't:**

Check per-user delivery details in the expandable results table. Different failure reasons indicate different issues:
- 404 errors suggest the extension is not installed on that user's image version
- Timeout errors suggest a container networking issue for that user
- 401 errors suggest a token generation or validation problem for that user

Individual failures don't prevent successful delivery to other users. The broadcast system is fault-tolerant by design.

**Notifications appear but with wrong styling:**

Verify the notification type selection in the form. The extension maps types to colors - using "info" when intending "error" results in a blue notification instead of red. Check the logs for the notification type actually sent.
## Performance Characteristics

The broadcast system is designed for efficient delivery to large numbers of concurrent users.

**Scalability Metrics:**
- 10 active users: ~5 second broadcast time (parallel delivery)
- 50 active users: ~5 second broadcast time (parallel delivery with concurrency)
- 100 active users: ~5-6 second broadcast time (timeout limits total duration)
- 500 active users: ~5-6 second broadcast time (asyncio.gather handles concurrency)

Delivery time is primarily determined by the timeout setting (5 seconds) rather than user count due to concurrent request handling. Even with hundreds of users, total broadcast duration remains under 10 seconds.

**Resource Utilization:**

Each concurrent delivery creates a TCP connection and HTTP request. A 100-user broadcast creates 100 simultaneous connections from the JupyterHub container. This is well within typical system limits but may trigger rate limiting or connection limits on restricted networks.

The platform generates temporary API tokens for all users simultaneously. Token generation is lightweight but does query the database. Very large user counts (1000+) may benefit from batch token generation to reduce database load.
## Future Enhancements

Potential improvements for the notification broadcast system:

**Selective Targeting:**
- Broadcast to specific groups instead of all users
- User selection interface for targeted notifications
- Group-based notification permissions

**Scheduled Notifications:**
- Queue notifications for future delivery
- Recurring notifications for regular maintenance windows
- Time-zone aware scheduling for distributed teams

**Notification History:**
- Database persistence of broadcast history
- Admin view of previously sent notifications
- User notification inbox for viewing past messages

**Rich Content:**
- Markdown formatting support in messages
- Embedded links and action buttons
- File attachments or image previews

These enhancements would require modifications to both the broadcast system and the JupyterLab extension. The current implementation provides a solid foundation for basic admin-to-user communication needs.
131  doc/ui-template-customization.md (Normal file)
@@ -0,0 +1,131 @@
# UI Template Customization

JupyterHub's web interface can be customized by extending base templates. This platform customizes the user control panel to add self-service features like server restart and volume management directly accessible from the home page.

**Key Implementation Details:**
- Custom templates placed in `services/jupyterhub/templates/`
- Templates extend JupyterHub's base templates using Jinja2 `{% extends "page.html" %}`
- The Docker image copies templates to `/srv/jupyterhub/templates/` at build time
- JupyterHub automatically discovers custom templates in this directory
- Changes require a Docker rebuild and container restart to take effect
## Template Structure

JupyterHub uses the Jinja2 templating engine with a hierarchical template system. The base template `page.html` defines the overall page structure including navigation, headers, and content blocks. Custom templates extend this base and override specific blocks to add new functionality.

The `home.html` template is the user control panel where users manage their JupyterLab server. This platform extends it to add custom buttons for server restart and volume management operations. The template must preserve existing JupyterHub functionality while adding new features.

**Template Blocks Available:**
- `{% block main %}` - Main content area for page-specific content
- `{% block script %}` - JavaScript section for client-side functionality
- `{% block nav_bar_left_items %}` - Left side navigation menu items
- `{% block nav_bar_right_items %}` - Right side navigation menu items
## JavaScript Integration

Custom templates often need JavaScript for interactive features. JupyterHub loads jQuery and other libraries via the RequireJS module system. All custom JavaScript must be wrapped in RequireJS `require()` calls to ensure proper dependency loading.

**RequireJS Pattern:**
```javascript
<script>
  require(["jquery"], function($) {
    "use strict";
    // Your code here with jQuery as $
  });
</script>
```

This pattern ensures jQuery is loaded before the custom code executes. Without this wrapper, custom JavaScript may fail with "$ is not defined" errors. The platform's custom handlers use this pattern for all interactive features including volume management modals and server restart buttons.
## Bootstrap 5 Compatibility

JupyterHub 5.4.2 uses Bootstrap 5 for UI components. Custom templates must use Bootstrap 5 syntax, not Bootstrap 4. Modal triggers use `data-bs-toggle` and `data-bs-target` attributes instead of the older `data-toggle` and `data-target` attributes.

**Bootstrap 5 Modal Example:**
```html
<button type="button" class="btn btn-primary"
        data-bs-toggle="modal"
        data-bs-target="#myModal">
  Open Modal
</button>
```

The close button in modals uses the `btn-close` class instead of custom HTML with the `&times;` entity. These differences are critical - Bootstrap 4 syntax silently fails in Bootstrap 5 without error messages.
## Font Awesome Icons

The platform uses Font Awesome icons to enhance button visibility and user experience. Icons are added using `<i>` tags with appropriate classes.

**Icon Examples:**
- Server restart: `<i class="fa fa-rotate" aria-hidden="true"></i>`
- Volume management: `<i class="fa fa-database" aria-hidden="true"></i>`
- Server stop: `<i class="fa fa-stop" aria-hidden="true"></i>`
- Server start: `<i class="fa fa-play" aria-hidden="true"></i>`

Icons should include the `aria-hidden="true"` attribute to prevent screen readers from announcing them redundantly when button text is already present.
## Custom API Handlers

Templates interact with custom API handlers registered in `jupyterhub_config.py`. These handlers extend JupyterHub's REST API with new endpoints for platform-specific features.

The platform registers handlers using the `c.JupyterHub.extra_handlers` configuration:
```python
c.JupyterHub.extra_handlers = [
    (r'/api/users/([^/]+)/manage-volumes', ManageVolumesHandler),
    (r'/api/users/([^/]+)/restart-server', RestartServerHandler),
    (r'/api/notifications/broadcast', BroadcastNotificationHandler),
    (r'/notifications', NotificationsPageHandler),
]
```

Each handler URL pattern uses a regex to extract route parameters like the username. Handlers must be imported from the `custom_handlers.py` module at the top of the config file.
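The route-parameter extraction works like ordinary Tornado routing: the capture group in the pattern yields the username from the request path. A quick illustration with Python's `re` module (the handler classes live in `custom_handlers.py` and are not shown here):

```python
import re

# Same pattern as the restart-server entry in extra_handlers
ROUTE = r'/api/users/([^/]+)/restart-server'

def extract_username(path):
    """Return the username captured by the route pattern, or None on no match."""
    match = re.fullmatch(ROUTE, path)
    return match.group(1) if match else None
```

The `[^/]+` group deliberately excludes slashes, so a path with extra segments fails to match rather than capturing a malformed username.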
## CSRF Protection

All POST requests from custom templates must include the XSRF token for security. JupyterHub provides this token automatically in templates via `{{ xsrf_form_html() }}` or through cookies accessible from JavaScript.

**AJAX Request with XSRF:**
```javascript
// Read a cookie value by name; the _xsrf cookie holds the token
function getCookie(name) {
  var match = document.cookie.match('\\b' + name + '=([^;]*)');
  return match ? match[1] : null;
}

$.ajax({
  url: '/hub/api/users/konrad/restart-server',
  method: 'POST',
  headers: {
    'X-XSRFToken': getCookie('_xsrf')
  },
  success: function(data) { /* handle success */ }
});
```

Missing XSRF tokens result in 403 Forbidden errors. The platform's custom handlers automatically validate XSRF tokens on all POST, PUT, and DELETE requests.
## Build Process

Custom templates are copied into the Docker image during build. The Dockerfile includes:
```dockerfile
COPY --chmod=644 services/jupyterhub/templates/*.html /srv/jupyterhub/templates/
```

Changes to templates require rebuilding the Docker image and recreating the JupyterHub container. The build process should use the `--no-cache` flag to ensure template changes are not cached from previous builds.

**Rebuild Command:**
```bash
docker compose -f compose.yml build --no-cache jupyterhub
docker stop stellars-jupyterhub-ds-jupyterhub
docker rm stellars-jupyterhub-ds-jupyterhub
docker compose up -d jupyterhub
```

A simple container restart with `docker restart` does not reload changed templates - the container must be recreated from the updated image.
## Testing Custom Templates

After deploying custom templates, verify they render correctly by accessing the relevant pages in a browser. Check the browser console for JavaScript errors and the network tab for failed API requests.

**Common Issues:**
- 404 errors indicate the template file was not found in `/srv/jupyterhub/templates/`
- JavaScript errors about an undefined `$` mean the RequireJS wrapper is missing
- Bootstrap modals not opening indicate Bootstrap 4 syntax instead of Bootstrap 5
- 403 errors on POST requests indicate a missing XSRF token
- Buttons without icons mean Font Awesome is not loaded or the class names are wrong

Template customization enables powerful extensions to JupyterHub without forking the core codebase. The platform demonstrates this with server restart, volume management, and notification broadcast features all built through template extensions.