Add SEO fields to Tasks model, improve content generation response handling, and enhance progress bar animation

- Added primary_keyword, secondary_keywords, tags, and categories fields to Tasks model
- Updated generate_content function to handle full JSON response with all SEO fields
- Improved progress bar animation: smooth 1% increments every 300ms
- Enhanced step detection for content generation vs clustering vs ideas
- Fixed progress modal to show correct messages for each function type
- Added comprehensive logging to Keywords and Tasks pages for AI functions
- Fixed error handling to show meaningful error messages instead of generic failures
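For context, a minimal sketch of the shape these changes imply. The field names come from the bullets above; the types, defaults, and the helper function are illustrative assumptions, not the project's actual code:

```python
# Hypothetical sketch: only the four field names are taken from the
# commit message; everything else here is assumed for illustration.
from django.db import models


class Tasks(models.Model):
    primary_keyword = models.CharField(max_length=255, blank=True)
    secondary_keywords = models.JSONField(default=list, blank=True)
    tags = models.JSONField(default=list, blank=True)
    categories = models.JSONField(default=list, blank=True)


def apply_generated_content(task, response_json):
    """Copy each SEO field present in the full JSON response onto the
    task, keeping the existing value when a key is missing."""
    for field in ("primary_keyword", "secondary_keywords", "tags", "categories"):
        if field in response_json:
            setattr(task, field, response_json[field])
    task.save()
```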
Gitea Deploy committed 2025-11-09 21:22:34 +00:00
parent 09d22ab0e2
commit 961362e088
17340 changed files with 10636 additions and 2248776 deletions

@@ -1 +0,0 @@
auto-test-1762687608

@@ -1 +0,0 @@
final-auto-test-1762687631

@@ -1 +0,0 @@
final-test-1762686854

@@ -1 +0,0 @@
hook-fix-test-1762687555

@@ -1 +0,0 @@
hook-test-1762687079

@@ -1 +0,0 @@
test-1762686342

@@ -1 +0,0 @@
test-1762686375

@@ -1 +0,0 @@
# Webhook test - Sun Nov 9 10:50:47 UTC 2025

@@ -1,5 +0,0 @@
Collecting psutil
Downloading psutil-7.1.3-cp36-abi3-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl.metadata (23 kB)
Downloading psutil-7.1.3-cp36-abi3-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl (263 kB)
Installing collected packages: psutil
Successfully installed psutil-7.1.3

@@ -1,11 +0,0 @@
Collecting docker
Downloading docker-7.1.0-py3-none-any.whl.metadata (3.8 kB)
Requirement already satisfied: requests>=2.26.0 in /usr/local/lib/python3.11/site-packages (from docker) (2.32.5)
Requirement already satisfied: urllib3>=1.26.0 in /usr/local/lib/python3.11/site-packages (from docker) (2.5.0)
Requirement already satisfied: charset_normalizer<4,>=2 in /usr/local/lib/python3.11/site-packages (from requests>=2.26.0->docker) (3.4.4)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.11/site-packages (from requests>=2.26.0->docker) (3.11)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.11/site-packages (from requests>=2.26.0->docker) (2025.10.5)
Downloading docker-7.1.0-py3-none-any.whl (147 kB)
Installing collected packages: docker
Successfully installed docker-7.1.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.

@@ -1,81 +0,0 @@
# Plan Limits Comparison Table
## Accounts
1. **dev@igny8.com** (Developer Account)
2. **scale@igny8.com** (Scale Account)
## Complete Limit Comparison Table
| Limit Category | Limit Name | dev@igny8.com | scale@igny8.com | Notes |
|----------------|-------------|---------------|-----------------|-------|
| **GENERAL LIMITS** | | | | |
| | Max Users | TBD | TBD | Total users allowed per account |
| | Max Sites | TBD | TBD | Maximum number of sites allowed |
| **PLANNER LIMITS** | | | | |
| | Max Keywords | TBD | TBD | Total keywords allowed (global limit) |
| | Max Clusters | TBD | TBD | Total clusters allowed (global) |
| | Max Content Ideas | TBD | TBD | Total content ideas allowed (global limit) |
| | Daily Cluster Limit | TBD | TBD | Max clusters that can be created per day |
| **WRITER LIMITS** | | | | |
| | Monthly Word Count Limit | TBD | TBD | Monthly word limit (for generated content) |
| | Daily Content Tasks | TBD | TBD | Max number of content tasks (blogs) per day |
| **IMAGE LIMITS** | | | | |
| | Monthly Image Count | TBD | TBD | Max images per month |
| | Daily Image Generation Limit | TBD | TBD | Max images that can be generated per day |
| **AI CREDITS** | | | | |
| | Monthly AI Credit Limit | TBD | TBD | Unified credit ceiling per month (all AI functions) |
| | Monthly Cluster AI Credits | TBD | TBD | AI credits allocated for clustering |
| | Monthly Content AI Credits | TBD | TBD | AI credit pool for content generation |
| | Monthly Image AI Credits | TBD | TBD | AI credit pool for image generation |
| | Credits Per Month (Effective) | TBD | TBD | Effective credits (included_credits or credits_per_month) |
## Current Status from Images
### Image 1 (dev@igny8.com):
- **Page Title**: "Usage" ✓ (Changed successfully)
- **Debug Info**: `Loading=No, Limits=0, Planner=0, Writer=0, Images=0, AI=0, General=0`
- **Error Message**: "No usage limits data available"
- **Status**: API endpoint not returning data or account has no plan assigned
### Image 2 (scale@igny8.com):
- **Page Title**: "Usage" ✓ (Changed successfully)
- **Debug Info**: `Loading=No, Limits=0, Planner=0, Writer=0, Images=0, AI=0, General=0`
- **Error Message**: "No usage limits data available"
- **Status**: API endpoint not returning data or account has no plan assigned
## Issue Identified
Both accounts show `Limits=0`, which means:
1. The API endpoint `/v1/billing/credits/usage/limits/` is being called
2. But it's returning an empty array `[]` or the accounts don't have plans
3. The frontend correctly shows the error message when no data is available
## Next Steps
To get the actual limit values, we need to:
1. Check if accounts have plans assigned in the database
2. If plans exist, verify the API endpoint is correctly querying and returning the data
3. If no plans exist, assign plans to the accounts
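For step 1, a quick check from a Django shell could look like the sketch below; the account/plan relations are assumptions, not the actual schema:

```python
# Hypothetical: the "account" and "plan" relations are assumed names.
from django.contrib.auth import get_user_model

User = get_user_model()
for email in ("dev@igny8.com", "scale@igny8.com"):
    user = User.objects.filter(email=email).first()
    account = getattr(user, "account", None) if user else None
    plan = getattr(account, "plan", None) if account else None
    print(f"{email}: plan={plan if plan else 'NONE ASSIGNED'}")
```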
## API Endpoint Details
- **URL**: `/v1/billing/credits/usage/limits/`
- **Method**: GET
- **Authentication**: Required (JWT or Session)
- **Response Format**:
```json
{
  "limits": [
    {
      "title": "Keywords",
      "limit": 1000,
      "used": 0,
      "available": 1000,
      "unit": "keywords",
      "category": "planner",
      "percentage": 0.0
    },
    ...
  ]
}
```
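For step 2, a minimal authenticated probe of the endpoint (base URL and token are placeholders):

```python
# Placeholder host and JWT; an empty "limits" list reproduces the
# "No usage limits data available" state seen on both accounts.
import requests

resp = requests.get(
    "https://app.example.com/v1/billing/credits/usage/limits/",
    headers={"Authorization": "Bearer <JWT>"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json().get("limits", []))
```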

@@ -1,247 +0,0 @@
<#
.Synopsis
Activate a Python virtual environment for the current PowerShell session.
.Description
Pushes the python executable for a virtual environment to the front of the
$Env:PATH environment variable and sets the prompt to signify that you are
in a Python virtual environment. Makes use of the command line switches as
well as the `pyvenv.cfg` file values present in the virtual environment.
.Parameter VenvDir
Path to the directory that contains the virtual environment to activate. The
default value for this is the parent of the directory that the Activate.ps1
script is located within.
.Parameter Prompt
The prompt prefix to display when this virtual environment is activated. By
default, this prompt is the name of the virtual environment folder (VenvDir)
surrounded by parentheses and followed by a single space (ie. '(.venv) ').
.Example
Activate.ps1
Activates the Python virtual environment that contains the Activate.ps1 script.
.Example
Activate.ps1 -Verbose
Activates the Python virtual environment that contains the Activate.ps1 script,
and shows extra information about the activation as it executes.
.Example
Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv
Activates the Python virtual environment located in the specified location.
.Example
Activate.ps1 -Prompt "MyPython"
Activates the Python virtual environment that contains the Activate.ps1 script,
and prefixes the current prompt with the specified string (surrounded in
parentheses) while the virtual environment is active.
.Notes
On Windows, it may be required to enable this Activate.ps1 script by setting the
execution policy for the user. You can do this by issuing the following PowerShell
command:
PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
For more information on Execution Policies:
https://go.microsoft.com/fwlink/?LinkID=135170
#>
Param(
[Parameter(Mandatory = $false)]
[String]
$VenvDir,
[Parameter(Mandatory = $false)]
[String]
$Prompt
)
<# Function declarations --------------------------------------------------- #>
<#
.Synopsis
Remove all shell session elements added by the Activate script, including the
addition of the virtual environment's Python executable from the beginning of
the PATH variable.
.Parameter NonDestructive
If present, do not remove this function from the global namespace for the
session.
#>
function global:deactivate ([switch]$NonDestructive) {
# Revert to original values
# The prior prompt:
if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) {
Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt
Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT
}
# The prior PYTHONHOME:
if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) {
Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME
Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME
}
# The prior PATH:
if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) {
Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH
Remove-Item -Path Env:_OLD_VIRTUAL_PATH
}
# Just remove the VIRTUAL_ENV altogether:
if (Test-Path -Path Env:VIRTUAL_ENV) {
Remove-Item -Path env:VIRTUAL_ENV
}
# Just remove VIRTUAL_ENV_PROMPT altogether.
if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) {
Remove-Item -Path env:VIRTUAL_ENV_PROMPT
}
# Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether:
if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) {
Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force
}
# Leave deactivate function in the global namespace if requested:
if (-not $NonDestructive) {
Remove-Item -Path function:deactivate
}
}
<#
.Description
Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the
given folder, and returns them in a map.
For each line in the pyvenv.cfg file, if that line can be parsed into exactly
two strings separated by `=` (with any amount of whitespace surrounding the =)
then it is considered a `key = value` line. The left hand string is the key,
the right hand is the value.
If the value starts with a `'` or a `"` then the first and last character is
stripped from the value before being captured.
.Parameter ConfigDir
Path to the directory that contains the `pyvenv.cfg` file.
#>
function Get-PyVenvConfig(
[String]
$ConfigDir
) {
Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg"
# Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue).
$pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue
# An empty map will be returned if no config file is found.
$pyvenvConfig = @{ }
if ($pyvenvConfigPath) {
Write-Verbose "File exists, parse `key = value` lines"
$pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath
$pyvenvConfigContent | ForEach-Object {
$keyval = $PSItem -split "\s*=\s*", 2
if ($keyval[0] -and $keyval[1]) {
$val = $keyval[1]
# Remove extraneous quotations around a string value.
if ("'""".Contains($val.Substring(0, 1))) {
$val = $val.Substring(1, $val.Length - 2)
}
$pyvenvConfig[$keyval[0]] = $val
Write-Verbose "Adding Key: '$($keyval[0])'='$val'"
}
}
}
return $pyvenvConfig
}
<# Begin Activate script --------------------------------------------------- #>
# Determine the containing directory of this script
$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition
$VenvExecDir = Get-Item -Path $VenvExecPath
Write-Verbose "Activation script is located in path: '$VenvExecPath'"
Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)"
Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)"
# Set values required in priority: CmdLine, ConfigFile, Default
# First, get the location of the virtual environment, it might not be
# VenvExecDir if specified on the command line.
if ($VenvDir) {
Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values"
}
else {
Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir."
$VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/")
Write-Verbose "VenvDir=$VenvDir"
}
# Next, read the `pyvenv.cfg` file to determine any required value such
# as `prompt`.
$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir
# Next, set the prompt from the command line, or the config file, or
# just use the name of the virtual environment folder.
if ($Prompt) {
Write-Verbose "Prompt specified as argument, using '$Prompt'"
}
else {
Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value"
if ($pyvenvCfg -and $pyvenvCfg['prompt']) {
Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'"
$Prompt = $pyvenvCfg['prompt'];
}
else {
Write-Verbose " Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virtual environment)"
Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'"
$Prompt = Split-Path -Path $venvDir -Leaf
}
}
Write-Verbose "Prompt = '$Prompt'"
Write-Verbose "VenvDir='$VenvDir'"
# Deactivate any currently active virtual environment, but leave the
# deactivate function in place.
deactivate -nondestructive
# Now set the environment variable VIRTUAL_ENV, used by many tools to determine
# that there is an activated venv.
$env:VIRTUAL_ENV = $VenvDir
if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) {
Write-Verbose "Setting prompt to '$Prompt'"
# Set the prompt to include the env name
# Make sure _OLD_VIRTUAL_PROMPT is global
function global:_OLD_VIRTUAL_PROMPT { "" }
Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT
New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt
function global:prompt {
Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) "
_OLD_VIRTUAL_PROMPT
}
$env:VIRTUAL_ENV_PROMPT = $Prompt
}
# Clear PYTHONHOME
if (Test-Path -Path Env:PYTHONHOME) {
Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME
Remove-Item -Path Env:PYTHONHOME
}
# Add the venv to the PATH
Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH
$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH"

@@ -1,70 +0,0 @@
# This file must be used with "source bin/activate" *from bash*
# You cannot run it directly
deactivate () {
# reset old environment variables
if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
PATH="${_OLD_VIRTUAL_PATH:-}"
export PATH
unset _OLD_VIRTUAL_PATH
fi
if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
export PYTHONHOME
unset _OLD_VIRTUAL_PYTHONHOME
fi
# Call hash to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
hash -r 2> /dev/null
if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
PS1="${_OLD_VIRTUAL_PS1:-}"
export PS1
unset _OLD_VIRTUAL_PS1
fi
unset VIRTUAL_ENV
unset VIRTUAL_ENV_PROMPT
if [ ! "${1:-}" = "nondestructive" ] ; then
# Self destruct!
unset -f deactivate
fi
}
# unset irrelevant variables
deactivate nondestructive
# on Windows, a path can contain colons and backslashes and has to be converted:
if [ "${OSTYPE:-}" = "cygwin" ] || [ "${OSTYPE:-}" = "msys" ] ; then
# transform D:\path\to\venv to /d/path/to/venv on MSYS
# and to /cygdrive/d/path/to/venv on Cygwin
export VIRTUAL_ENV=$(cygpath /data/app/igny8/backend/.venv)
else
# use the path as-is
export VIRTUAL_ENV=/data/app/igny8/backend/.venv
fi
_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/"bin":$PATH"
export PATH
# unset PYTHONHOME if set
# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
# could use `if (set -u; : $PYTHONHOME) ;` in bash
if [ -n "${PYTHONHOME:-}" ] ; then
_OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
unset PYTHONHOME
fi
if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
_OLD_VIRTUAL_PS1="${PS1:-}"
PS1='(.venv) '"${PS1:-}"
export PS1
VIRTUAL_ENV_PROMPT='(.venv) '
export VIRTUAL_ENV_PROMPT
fi
# Call hash to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
hash -r 2> /dev/null

@@ -1,27 +0,0 @@
# This file must be used with "source bin/activate.csh" *from csh*.
# You cannot run it directly.
# Created by Davide Di Blasi <davidedb@gmail.com>.
# Ported to Python 3.3 venv by Andrew Svetlov <andrew.svetlov@gmail.com>
alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; unsetenv VIRTUAL_ENV_PROMPT; test "\!:*" != "nondestructive" && unalias deactivate'
# Unset irrelevant variables.
deactivate nondestructive
setenv VIRTUAL_ENV /data/app/igny8/backend/.venv
set _OLD_VIRTUAL_PATH="$PATH"
setenv PATH "$VIRTUAL_ENV/"bin":$PATH"
set _OLD_VIRTUAL_PROMPT="$prompt"
if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then
set prompt = '(.venv) '"$prompt"
setenv VIRTUAL_ENV_PROMPT '(.venv) '
endif
alias pydoc python -m pydoc
rehash

@@ -1,69 +0,0 @@
# This file must be used with "source <venv>/bin/activate.fish" *from fish*
# (https://fishshell.com/). You cannot run it directly.
function deactivate -d "Exit virtual environment and return to normal shell environment"
# reset old environment variables
if test -n "$_OLD_VIRTUAL_PATH"
set -gx PATH $_OLD_VIRTUAL_PATH
set -e _OLD_VIRTUAL_PATH
end
if test -n "$_OLD_VIRTUAL_PYTHONHOME"
set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME
set -e _OLD_VIRTUAL_PYTHONHOME
end
if test -n "$_OLD_FISH_PROMPT_OVERRIDE"
set -e _OLD_FISH_PROMPT_OVERRIDE
# prevents error when using nested fish instances (Issue #93858)
if functions -q _old_fish_prompt
functions -e fish_prompt
functions -c _old_fish_prompt fish_prompt
functions -e _old_fish_prompt
end
end
set -e VIRTUAL_ENV
set -e VIRTUAL_ENV_PROMPT
if test "$argv[1]" != "nondestructive"
# Self-destruct!
functions -e deactivate
end
end
# Unset irrelevant variables.
deactivate nondestructive
set -gx VIRTUAL_ENV /data/app/igny8/backend/.venv
set -gx _OLD_VIRTUAL_PATH $PATH
set -gx PATH "$VIRTUAL_ENV/"bin $PATH
# Unset PYTHONHOME if set.
if set -q PYTHONHOME
set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME
set -e PYTHONHOME
end
if test -z "$VIRTUAL_ENV_DISABLE_PROMPT"
# fish uses a function instead of an env var to generate the prompt.
# Save the current fish_prompt function as the function _old_fish_prompt.
functions -c fish_prompt _old_fish_prompt
# With the original prompt function renamed, we can override with our own.
function fish_prompt
# Save the return status of the last command.
set -l old_status $status
# Output the venv prompt; color taken from the blue of the Python logo.
printf "%s%s%s" (set_color 4B8BBE) '(.venv) ' (set_color normal)
# Restore the return status of the previous command.
echo "exit $old_status" | .
# Output the original/"old" prompt.
_old_fish_prompt
end
set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV"
set -gx VIRTUAL_ENV_PROMPT '(.venv) '
end

@@ -1,8 +0,0 @@
#!/data/app/igny8/backend/.venv/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from celery.__main__ import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())

@@ -1,8 +0,0 @@
#!/data/app/igny8/backend/.venv/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from django.core.management import execute_from_command_line
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(execute_from_command_line())

@@ -1,8 +0,0 @@
#!/data/app/igny8/backend/.venv/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from gunicorn.app.wsgiapp import run
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(run())

@@ -1,8 +0,0 @@
#!/data/app/igny8/backend/.venv/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from charset_normalizer.cli import cli_detect
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(cli_detect())

@@ -1,8 +0,0 @@
#!/data/app/igny8/backend/.venv/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from pip._internal.cli.main import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())

@@ -1,8 +0,0 @@
#!/data/app/igny8/backend/.venv/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from pip._internal.cli.main import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())

@@ -1,8 +0,0 @@
#!/data/app/igny8/backend/.venv/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from pip._internal.cli.main import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())

@@ -1 +0,0 @@
python3

@@ -1 +0,0 @@
/usr/bin/python3

@@ -1 +0,0 @@
python3

@@ -1,8 +0,0 @@
#!/data/app/igny8/backend/.venv/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from sqlparse.__main__ import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())

@@ -1,7 +0,0 @@
Authors
=======
``pyjwt`` is currently written and maintained by `Jose Padilla <https://github.com/jpadilla>`_.
Originally written and maintained by `Jeff Lindsay <https://github.com/progrium>`_.
A full list of contributors can be found on GitHub's `overview <https://github.com/jpadilla/pyjwt/graphs/contributors>`_.

@@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015-2022 José Padilla
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@@ -1,106 +0,0 @@
Metadata-Version: 2.1
Name: PyJWT
Version: 2.10.1
Summary: JSON Web Token implementation in Python
Author-email: Jose Padilla <hello@jpadilla.com>
License: MIT
Project-URL: Homepage, https://github.com/jpadilla/pyjwt
Keywords: json,jwt,security,signing,token,web
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Utilities
Requires-Python: >=3.9
Description-Content-Type: text/x-rst
License-File: LICENSE
License-File: AUTHORS.rst
Provides-Extra: crypto
Requires-Dist: cryptography>=3.4.0; extra == "crypto"
Provides-Extra: dev
Requires-Dist: coverage[toml]==5.0.4; extra == "dev"
Requires-Dist: cryptography>=3.4.0; extra == "dev"
Requires-Dist: pre-commit; extra == "dev"
Requires-Dist: pytest<7.0.0,>=6.0.0; extra == "dev"
Requires-Dist: sphinx; extra == "dev"
Requires-Dist: sphinx-rtd-theme; extra == "dev"
Requires-Dist: zope.interface; extra == "dev"
Provides-Extra: docs
Requires-Dist: sphinx; extra == "docs"
Requires-Dist: sphinx-rtd-theme; extra == "docs"
Requires-Dist: zope.interface; extra == "docs"
Provides-Extra: tests
Requires-Dist: coverage[toml]==5.0.4; extra == "tests"
Requires-Dist: pytest<7.0.0,>=6.0.0; extra == "tests"
PyJWT
=====
.. image:: https://github.com/jpadilla/pyjwt/workflows/CI/badge.svg
   :target: https://github.com/jpadilla/pyjwt/actions?query=workflow%3ACI

.. image:: https://img.shields.io/pypi/v/pyjwt.svg
   :target: https://pypi.python.org/pypi/pyjwt

.. image:: https://codecov.io/gh/jpadilla/pyjwt/branch/master/graph/badge.svg
   :target: https://codecov.io/gh/jpadilla/pyjwt

.. image:: https://readthedocs.org/projects/pyjwt/badge/?version=stable
   :target: https://pyjwt.readthedocs.io/en/stable/
A Python implementation of `RFC 7519 <https://tools.ietf.org/html/rfc7519>`_. Original implementation was written by `@progrium <https://github.com/progrium>`_.
Sponsor
-------
.. |auth0-logo| image:: https://github.com/user-attachments/assets/ee98379e-ee76-4bcb-943a-e25c4ea6d174
   :width: 160px

+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| |auth0-logo| | If you want to quickly add secure token-based authentication to Python projects, feel free to check Auth0's Python SDK and free plan at `auth0.com/signup <https://auth0.com/signup?utm_source=external_sites&utm_medium=pyjwt&utm_campaign=devn_signup>`_. |
+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Installing
----------
Install with **pip**:

.. code-block:: console

    $ pip install PyJWT
Usage
-----
.. code-block:: pycon

    >>> import jwt
    >>> encoded = jwt.encode({"some": "payload"}, "secret", algorithm="HS256")
    >>> print(encoded)
    eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb21lIjoicGF5bG9hZCJ9.4twFt5NiznN84AWoo1d7KO1T_yoc0Z6XOpOVswacPZg
    >>> jwt.decode(encoded, "secret", algorithms=["HS256"])
    {'some': 'payload'}
Documentation
-------------
View the full docs online at https://pyjwt.readthedocs.io/en/stable/
Tests
-----
You can run tests from the project root after cloning with:
.. code-block:: console

    $ tox

@@ -1,33 +0,0 @@
PyJWT-2.10.1.dist-info/AUTHORS.rst,sha256=klzkNGECnu2_VY7At89_xLBF3vUSDruXk3xwgUBxzwc,322
PyJWT-2.10.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
PyJWT-2.10.1.dist-info/LICENSE,sha256=eXp6ICMdTEM-nxkR2xcx0GtYKLmPSZgZoDT3wPVvXOU,1085
PyJWT-2.10.1.dist-info/METADATA,sha256=EkewF6D6KU8SGaaQzVYfxUUU1P_gs_dp1pYTkoYvAx8,3990
PyJWT-2.10.1.dist-info/RECORD,,
PyJWT-2.10.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
PyJWT-2.10.1.dist-info/WHEEL,sha256=PZUExdf71Ui_so67QXpySuHtCi3-J3wvF4ORK6k_S8U,91
PyJWT-2.10.1.dist-info/top_level.txt,sha256=RP5DHNyJbMq2ka0FmfTgoSaQzh7e3r5XuCWCO8a00k8,4
jwt/__init__.py,sha256=VB2vFKuboTjcDGeZ8r-UqK_dz3NsQSQEqySSICby8Xg,1711
jwt/__pycache__/__init__.cpython-312.pyc,,
jwt/__pycache__/algorithms.cpython-312.pyc,,
jwt/__pycache__/api_jwk.cpython-312.pyc,,
jwt/__pycache__/api_jws.cpython-312.pyc,,
jwt/__pycache__/api_jwt.cpython-312.pyc,,
jwt/__pycache__/exceptions.cpython-312.pyc,,
jwt/__pycache__/help.cpython-312.pyc,,
jwt/__pycache__/jwk_set_cache.cpython-312.pyc,,
jwt/__pycache__/jwks_client.cpython-312.pyc,,
jwt/__pycache__/types.cpython-312.pyc,,
jwt/__pycache__/utils.cpython-312.pyc,,
jwt/__pycache__/warnings.cpython-312.pyc,,
jwt/algorithms.py,sha256=cKr-XEioe0mBtqJMCaHEswqVOA1Z8Purt5Sb3Bi-5BE,30409
jwt/api_jwk.py,sha256=6F1r7rmm8V5qEnBKA_xMjS9R7VoANe1_BL1oD2FrAjE,4451
jwt/api_jws.py,sha256=aM8vzqQf6mRrAw7bRy-Moj_pjWsKSVQyYK896AfMjJU,11762
jwt/api_jwt.py,sha256=OGT4hok1l5A6FH_KdcrU5g6u6EQ8B7em0r9kGM9SYgA,14512
jwt/exceptions.py,sha256=bUIOJ-v9tjopTLS-FYOTc3kFx5WP5IZt7ksN_HE1G9Q,1211
jwt/help.py,sha256=vFdNzjQoAch04XCMYpCkyB2blaqHAGAqQrtf9nSPkdk,1808
jwt/jwk_set_cache.py,sha256=hBKmN-giU7-G37L_XKgc_OZu2ah4wdbj1ZNG_GkoSE8,959
jwt/jwks_client.py,sha256=p9b-IbQqo2tEge9Zit3oSPBFNePqwho96VLbnUrHUWs,4259
jwt/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
jwt/types.py,sha256=VnhGv_VFu5a7_mrPoSCB7HaNLrJdhM8Sq1sSfEg0gLU,99
jwt/utils.py,sha256=hxOjvDBheBYhz-RIPiEz7Q88dSUSTMzEdKE_Ww2VdJw,3640
jwt/warnings.py,sha256=50XWOnyNsIaqzUJTk6XHNiIDykiL763GYA92MjTKmok,59

@@ -1,5 +0,0 @@
Wheel-Version: 1.0
Generator: setuptools (75.6.0)
Root-Is-Purelib: true
Tag: py3-none-any

@@ -1,47 +0,0 @@
Copyright (c) 2015-2016 Ask Solem & contributors. All rights reserved.
Copyright (c) 2012-2014 GoPivotal, Inc. All rights reserved.
Copyright (c) 2009, 2010, 2011, 2012 Ask Solem, and individual contributors. All rights reserved.
Copyright (C) 2007-2008 Barry Pederson <bp@barryp.org>. All rights reserved.
py-amqp is licensed under The BSD License (3 Clause, also known as
the new BSD license). The license is an OSI approved Open Source
license and is GPL-compatible(1).
The license text can also be found here:
http://www.opensource.org/licenses/BSD-3-Clause
License
=======
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of Ask Solem, nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL Ask Solem OR CONTRIBUTORS
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Footnotes
=========
(1) A GPL-compatible license makes it possible to
combine Celery with other software that is released
under the GPL, it does not mean that we're distributing
Celery under the GPL license. The BSD license, unlike the GPL,
lets you distribute a modified version without making your
changes open source.

@@ -1,239 +0,0 @@
Metadata-Version: 2.1
Name: amqp
Version: 5.3.1
Summary: Low-level AMQP client for Python (fork of amqplib).
Home-page: http://github.com/celery/py-amqp
Author: Barry Pederson
Author-email: auvipy@gmail.com
Maintainer: Asif Saif Uddin, Matus Valo
License: BSD
Keywords: amqp rabbitmq cloudamqp messaging
Platform: any
Classifier: Development Status :: 5 - Production/Stable
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: License :: OSI Approved :: BSD License
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Requires-Python: >=3.6
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: vine<6.0.0,>=5.0.0
=====================================================================
Python AMQP 0.9.1 client library
=====================================================================
|build-status| |coverage| |license| |wheel| |pyversion| |pyimp|
:Version: 5.3.1
:Web: https://amqp.readthedocs.io/
:Download: https://pypi.org/project/amqp/
:Source: http://github.com/celery/py-amqp/
:Keywords: amqp, rabbitmq
About
=====
This is a fork of amqplib_ which was originally written by Barry Pederson.
It is maintained by the Celery_ project, and used by `kombu`_ as a pure python
alternative when `librabbitmq`_ is not available.
This library should be API compatible with `librabbitmq`_.
.. _amqplib: https://pypi.org/project/amqplib/
.. _Celery: http://celeryproject.org/
.. _kombu: https://kombu.readthedocs.io/
.. _librabbitmq: https://pypi.org/project/librabbitmq/
Differences from `amqplib`_
===========================
- Supports draining events from multiple channels (``Connection.drain_events``)
- Support for timeouts
- Channels are restored after channel error, instead of having to close the
  connection.
- Support for heartbeats
    - ``Connection.heartbeat_tick(rate=2)`` must be called at regular intervals
      (half of the heartbeat value if rate is 2; see the sketch after this list).
    - Or some other scheme by using ``Connection.send_heartbeat``.
- Supports RabbitMQ extensions:
    - Consumer Cancel Notifications
        - by default a cancel results in ``ChannelError`` being raised
        - but not if an ``on_cancel`` callback is passed to ``basic_consume``.
    - Publisher confirms
        - ``Channel.confirm_select()`` enables publisher confirms.
        - ``Channel.events['basic_ack'].append(my_callback)`` adds a callback
          to be called when a message is confirmed. This callback is then
          called with the signature ``(delivery_tag, multiple)``.
    - Exchange-to-exchange bindings: ``exchange_bind`` / ``exchange_unbind``.
- Authentication Failure Notifications
  Instead of just closing the connection abruptly on invalid
  credentials, py-amqp will raise an ``AccessRefused`` error
  when connected to rabbitmq-server 3.2.0 or greater.
- Support for ``basic_return``
- Uses AMQP 0-9-1 instead of 0-8.
    - ``Channel.access_request`` and ``ticket`` arguments to methods
      **removed**.
    - Supports the ``arguments`` argument to ``basic_consume``.
    - ``internal`` argument to ``exchange_declare`` removed.
    - ``auto_delete`` argument to ``exchange_declare`` deprecated
    - ``insist`` argument to ``Connection`` removed.
    - ``Channel.alerts`` has been removed.
    - Support for ``Channel.basic_recover_async``.
    - ``Channel.basic_recover`` deprecated.
- Exceptions renamed to have idiomatic names:
    - ``AMQPException`` -> ``AMQPError``
    - ``AMQPConnectionException`` -> ``ConnectionError``
    - ``AMQPChannelException`` -> ``ChannelError``
    - ``Connection.known_hosts`` removed.
    - ``Connection`` no longer supports redirects.
- ``exchange`` argument to ``queue_bind`` can now be empty
  to use the "default exchange".
- Adds ``Connection.is_alive`` that tries to detect
  whether the connection can still be used.
- Adds ``Connection.connection_errors`` and ``.channel_errors``,
  a list of recoverable errors.
- Exposes the underlying socket as ``Connection.sock``.
- Adds ``Channel.no_ack_consumers`` to keep track of consumer tags
  that set the no_ack flag.
- Slightly better at error recovery
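
A minimal sketch of the heartbeat rule above (the broker address and timings are placeholders):

.. code:: python

    import socket

    import amqp

    conn = amqp.Connection('broker.example.com', heartbeat=60)
    conn.connect()
    while True:
        try:
            # Wake at least every heartbeat/2 seconds (rate=2 means 30s here).
            conn.drain_events(timeout=30)
        except socket.timeout:
            conn.heartbeat_tick(rate=2)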
Quick overview
==============
Simple producer publishing messages to ``test`` queue using default exchange:
.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()
        ch.basic_publish(amqp.Message('Hello World'), routing_key='test')
Producer publishing to ``test_exchange`` exchange with publisher confirms enabled and using virtual_host ``test_vhost``:
.. code:: python

    import amqp

    with amqp.Connection(
        'broker.example.com', exchange='test_exchange',
        confirm_publish=True, virtual_host='test_vhost'
    ) as c:
        ch = c.channel()
        ch.basic_publish(amqp.Message('Hello World'), routing_key='test')
Consumer with acknowledgments enabled:
.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()

        def on_message(message):
            print('Received message (delivery tag: {}): {}'.format(message.delivery_tag, message.body))
            ch.basic_ack(message.delivery_tag)

        ch.basic_consume(queue='test', callback=on_message)
        while True:
            c.drain_events()
Consumer with acknowledgments disabled:
.. code:: python

    import amqp

    with amqp.Connection('broker.example.com') as c:
        ch = c.channel()

        def on_message(message):
            print('Received message (delivery tag: {}): {}'.format(message.delivery_tag, message.body))

        ch.basic_consume(queue='test', callback=on_message, no_ack=True)
        while True:
            c.drain_events()
Speedups
========
This library has **experimental** support of speedups. Speedups are implemented using Cython. To enable speedups, the ``CELERY_ENABLE_SPEEDUPS`` environment variable must be set during building/installation.
Currently speedups can be installed:
1. using the source package (using the ``--no-binary`` switch):

.. code:: shell

    CELERY_ENABLE_SPEEDUPS=true pip install --no-binary :all: amqp

2. building directly from source code:

.. code:: shell

    CELERY_ENABLE_SPEEDUPS=true python setup.py install
Further
=======
- Differences between AMQP 0.8 and 0.9.1
  http://www.rabbitmq.com/amqp-0-8-to-0-9-1.html
- AMQP 0.9.1 Quick Reference
  http://www.rabbitmq.com/amqp-0-9-1-quickref.html
- RabbitMQ Extensions
  http://www.rabbitmq.com/extensions.html
- For more information about AMQP, visit
  http://www.amqp.org
- For other Python client libraries see:
  http://www.rabbitmq.com/devtools.html#python-dev
.. |build-status| image:: https://github.com/celery/py-amqp/actions/workflows/ci.yaml/badge.svg
   :alt: Build status
   :target: https://github.com/celery/py-amqp/actions/workflows/ci.yaml

.. |coverage| image:: https://codecov.io/github/celery/py-amqp/coverage.svg?branch=main
   :target: https://codecov.io/github/celery/py-amqp?branch=main

.. |license| image:: https://img.shields.io/pypi/l/amqp.svg
   :alt: BSD License
   :target: https://opensource.org/licenses/BSD-3-Clause

.. |wheel| image:: https://img.shields.io/pypi/wheel/amqp.svg
   :alt: Python AMQP can be installed via wheel
   :target: https://pypi.org/project/amqp/

.. |pyversion| image:: https://img.shields.io/pypi/pyversions/amqp.svg
   :alt: Supported Python versions.
   :target: https://pypi.org/project/amqp/

.. |pyimp| image:: https://img.shields.io/pypi/implementation/amqp.svg
   :alt: Support Python implementations.
   :target: https://pypi.org/project/amqp/
py-amqp as part of the Tidelift Subscription
============================================
The maintainers of py-amqp and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. `Learn more. <https://tidelift.com/subscription/pkg/pypi-amqp?utm_source=pypi-amqp&utm_medium=referral&utm_campaign=readme&utm_term=repo>`_

@@ -1,34 +0,0 @@
amqp-5.3.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
amqp-5.3.1.dist-info/LICENSE,sha256=9e9fEoLq4ZMcdGRfhxm2xps9aizyd7_aJJqCcM1HOvM,2372
amqp-5.3.1.dist-info/METADATA,sha256=sv93q3ZseR0T9pcxMMq8Jt_pxL0PNI_cbKA48tbprNM,8887
amqp-5.3.1.dist-info/RECORD,,
amqp-5.3.1.dist-info/WHEEL,sha256=a7TGlA-5DaHMRrarXjVbQagU3Man_dCnGIWMJr5kRWo,91
amqp-5.3.1.dist-info/top_level.txt,sha256=tWQNmFVhU4UtDgB6Yy2lKqRz7LtOrRcN8_bPFVcVVR8,5
amqp/__init__.py,sha256=QvARRZLvrDJRy_JCybG6TmprblyQPyF1pzIgR3fNRv4,2357
amqp/__pycache__/__init__.cpython-312.pyc,,
amqp/__pycache__/abstract_channel.cpython-312.pyc,,
amqp/__pycache__/basic_message.cpython-312.pyc,,
amqp/__pycache__/channel.cpython-312.pyc,,
amqp/__pycache__/connection.cpython-312.pyc,,
amqp/__pycache__/exceptions.cpython-312.pyc,,
amqp/__pycache__/method_framing.cpython-312.pyc,,
amqp/__pycache__/platform.cpython-312.pyc,,
amqp/__pycache__/protocol.cpython-312.pyc,,
amqp/__pycache__/sasl.cpython-312.pyc,,
amqp/__pycache__/serialization.cpython-312.pyc,,
amqp/__pycache__/spec.cpython-312.pyc,,
amqp/__pycache__/transport.cpython-312.pyc,,
amqp/__pycache__/utils.cpython-312.pyc,,
amqp/abstract_channel.py,sha256=D_OEWvX48yKUzMYm_sN-IDRQmqIGvegi9KlJriqttBc,4941
amqp/basic_message.py,sha256=Q8DV31tuuphloTETPHiJFwNg6b5M6pccJ0InJ4MZUz8,3357
amqp/channel.py,sha256=XzCuKPy9qFMiTsnqksKpFIh9PUcKZm3uIXm1RFCeZQs,74475
amqp/connection.py,sha256=8vsfpVTsTJBS-uu_SEEEuT-RXMk_IX_jCldOHP-oDlo,27541
amqp/exceptions.py,sha256=yqjoFIRue2rvK7kMdvkKsGOD6dMOzzzT3ZzBwoGWAe4,7166
amqp/method_framing.py,sha256=avnw90X9t4995HpHoZV4-1V73UEbzUKJ83pHEicAqWY,6734
amqp/platform.py,sha256=cyLevv6E15P9zhMo_fV84p67Q_A8fdsTq9amjvlUwqE,2379
amqp/protocol.py,sha256=Di3y6qqhnOV4QtkeYKO-zryfWqwl3F1zUxDOmVSsAp0,291
amqp/sasl.py,sha256=6AbsnxlbAyoiYxDezoQTfm-E0t_TJyHXpqGJ0KlPkI4,5986
amqp/serialization.py,sha256=xzzXmmQ45fGUuSCxGTEMizmRQTmzaz3Z7YYfpxmfXuY,17162
amqp/spec.py,sha256=2ZjbL4FR4Fv67HA7HUI9hLUIvAv3A4ZH6GRPzrMRyWg,2121
amqp/transport.py,sha256=tG50r-ybeXGwe3SoA5BacNY9BzRJnRn7BZs3XBuKwO0,23046
amqp/utils.py,sha256=JjjY040LwsDUc1zmKP2VTzXBioVXy48DUZtWB8PaPy0,1456

@@ -1,5 +0,0 @@
Wheel-Version: 1.0
Generator: setuptools (75.4.0)
Root-Is-Purelib: true
Tag: py3-none-any

@@ -1,75 +0,0 @@
"""Low-level AMQP client for Python (fork of amqplib)."""
# Copyright (C) 2007-2008 Barry Pederson <bp@barryp.org>
import re
from collections import namedtuple
__version__ = '5.3.1'
__author__ = 'Barry Pederson'
__maintainer__ = 'Asif Saif Uddin, Matus Valo'
__contact__ = 'auvipy@gmail.com'
__homepage__ = 'http://github.com/celery/py-amqp'
__docformat__ = 'restructuredtext'
# -eof meta-
version_info_t = namedtuple('version_info_t', (
    'major', 'minor', 'micro', 'releaselevel', 'serial',
))

# bumpversion can only search for {current_version}
# so we have to parse the version here.
_temp = re.match(
    r'(\d+)\.(\d+).(\d+)(.+)?', __version__).groups()
VERSION = version_info = version_info_t(
    int(_temp[0]), int(_temp[1]), int(_temp[2]), _temp[3] or '', '')
del(_temp)
del(re)

from .basic_message import Message  # noqa
from .channel import Channel  # noqa
from .connection import Connection  # noqa
from .exceptions import (AccessRefused, AMQPError,  # noqa
                         AMQPNotImplementedError, ChannelError, ChannelNotOpen,
                         ConnectionError, ConnectionForced, ConsumerCancelled,
                         ContentTooLarge, FrameError, FrameSyntaxError,
                         InternalError, InvalidCommand, InvalidPath,
                         IrrecoverableChannelError,
                         IrrecoverableConnectionError, NoConsumers, NotAllowed,
                         NotFound, PreconditionFailed, RecoverableChannelError,
                         RecoverableConnectionError, ResourceError,
                         ResourceLocked, UnexpectedFrame, error_for_code)
from .utils import promise  # noqa

__all__ = (
    'Connection',
    'Channel',
    'Message',
    'promise',
    'AMQPError',
    'ConnectionError',
    'RecoverableConnectionError',
    'IrrecoverableConnectionError',
    'ChannelError',
    'RecoverableChannelError',
    'IrrecoverableChannelError',
    'ConsumerCancelled',
    'ContentTooLarge',
    'NoConsumers',
    'ConnectionForced',
    'InvalidPath',
    'AccessRefused',
    'NotFound',
    'ResourceLocked',
    'PreconditionFailed',
    'FrameError',
    'FrameSyntaxError',
    'InvalidCommand',
    'ChannelNotOpen',
    'UnexpectedFrame',
    'ResourceError',
    'NotAllowed',
    'AMQPNotImplementedError',
    'InternalError',
    'error_for_code',
)

@@ -1,163 +0,0 @@
"""Code common to Connection and Channel objects."""
# Copyright (C) 2007-2008 Barry Pederson <bp@barryp.org>)
import logging
from vine import ensure_promise, promise
from .exceptions import AMQPNotImplementedError, RecoverableConnectionError
from .serialization import dumps, loads
__all__ = ('AbstractChannel',)
AMQP_LOGGER = logging.getLogger('amqp')
IGNORED_METHOD_DURING_CHANNEL_CLOSE = """\
Received method %s during closing channel %s. This method will be ignored\
"""
class AbstractChannel:
    """Superclass for Connection and Channel.

    The connection is treated as channel 0, then comes
    user-created channel objects.

    The subclasses must have a _METHOD_MAP class property, mapping
    between AMQP method signatures and Python methods.
    """

    def __init__(self, connection, channel_id):
        self.is_closing = False
        self.connection = connection
        self.channel_id = channel_id
        connection.channels[channel_id] = self
        self.method_queue = []  # Higher level queue for methods
        self.auto_decode = False
        self._pending = {}
        self._callbacks = {}
        self._setup_listeners()

    __slots__ = (
        "is_closing",
        "connection",
        "channel_id",
        "method_queue",
        "auto_decode",
        "_pending",
        "_callbacks",
        # adding '__dict__' to get dynamic assignment
        "__dict__",
        "__weakref__",
    )

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self.close()

    def send_method(self, sig,
                    format=None, args=None, content=None,
                    wait=None, callback=None, returns_tuple=False):
        p = promise()
        conn = self.connection
        if conn is None:
            raise RecoverableConnectionError('connection already closed')
        args = dumps(format, args) if format else ''
        try:
            conn.frame_writer(1, self.channel_id, sig, args, content)
        except StopIteration:
            raise RecoverableConnectionError('connection already closed')

        # TODO temp: callback should be after write_method ... ;)
        if callback:
            p.then(callback)
        p()
        if wait:
            return self.wait(wait, returns_tuple=returns_tuple)
        return p

    def close(self):
        """Close this Channel or Connection."""
        raise NotImplementedError('Must be overridden in subclass')

    def wait(self, method, callback=None, timeout=None, returns_tuple=False):
        p = ensure_promise(callback)
        pending = self._pending
        prev_p = []
        if not isinstance(method, list):
            method = [method]

        for m in method:
            prev_p.append(pending.get(m))
            pending[m] = p

        try:
            while not p.ready:
                self.connection.drain_events(timeout=timeout)

            if p.value:
                args, kwargs = p.value
                args = args[1:]  # We are not returning method back
                return args if returns_tuple else (args and args[0])
        finally:
            for i, m in enumerate(method):
                if prev_p[i] is not None:
                    pending[m] = prev_p[i]
                else:
                    pending.pop(m, None)

    def dispatch_method(self, method_sig, payload, content):
        if self.is_closing and method_sig not in (
            self._ALLOWED_METHODS_WHEN_CLOSING
        ):
            # When channel.close() was called we must ignore all methods except
            # Channel.close and Channel.CloseOk
            AMQP_LOGGER.warning(
                IGNORED_METHOD_DURING_CHANNEL_CLOSE,
                method_sig, self.channel_id
            )
            return

        if content and \
                self.auto_decode and \
                hasattr(content, 'content_encoding'):
            try:
                content.body = content.body.decode(content.content_encoding)
            except Exception:
                pass

        try:
            amqp_method = self._METHODS[method_sig]
        except KeyError:
            raise AMQPNotImplementedError(
                f'Unknown AMQP method {method_sig!r}')

        try:
            listeners = [self._callbacks[method_sig]]
        except KeyError:
            listeners = []

        one_shot = None
        try:
            one_shot = self._pending.pop(method_sig)
        except KeyError:
            if not listeners:
                return

        args = []
        if amqp_method.args:
            args, _ = loads(amqp_method.args, payload, 4)
        if amqp_method.content:
            args.append(content)

        for listener in listeners:
            listener(*args)

        if one_shot:
            one_shot(method_sig, *args)

    #: Placeholder, the concrete implementations will have to
    #: supply their own versions of _METHOD_MAP
    _METHODS = {}

@@ -1,122 +0,0 @@
"""AMQP Messages."""
# Copyright (C) 2007-2008 Barry Pederson <bp@barryp.org>
from .serialization import GenericContent
# Intended to fix #85: ImportError: cannot import name spec
# Encountered on python 2.7.3
# "The submodules often need to refer to each other. For example, the
# surround [sic] module might use the echo module. In fact, such
# references are so common that the import statement first looks in
# the containing package before looking in the standard module search
# path."
# Source:
# http://stackoverflow.com/a/14216937/4982251
from .spec import Basic
__all__ = ('Message',)
class Message(GenericContent):
    """A Message for use with the Channel.basic_* methods.

    Expected arg types

        body: string
        children: (not supported)

    Keyword properties may include:

        content_type: shortstr
            MIME content type

        content_encoding: shortstr
            MIME content encoding

        application_headers: table
            Message header field table, a dict with string keys,
            and string | int | Decimal | datetime | dict values.

        delivery_mode: octet
            Non-persistent (1) or persistent (2)

        priority: octet
            The message priority, 0 to 9

        correlation_id: shortstr
            The application correlation identifier

        reply_to: shortstr
            The destination to reply to

        expiration: shortstr
            Message expiration specification

        message_id: shortstr
            The application message identifier

        timestamp: unsigned long
            The message timestamp

        type: shortstr
            The message type name

        user_id: shortstr
            The creating user id

        app_id: shortstr
            The creating application id

        cluster_id: shortstr
            Intra-cluster routing identifier

    Unicode bodies are encoded according to the 'content_encoding'
    argument. If that's None, it's set to 'UTF-8' automatically.

    Example::

        msg = Message('hello world',
                      content_type='text/plain',
                      application_headers={'foo': 7})
    """

    CLASS_ID = Basic.CLASS_ID

    #: Instances of this class have these attributes, which
    #: are passed back and forth as message properties between
    #: client and server
    PROPERTIES = [
        ('content_type', 's'),
        ('content_encoding', 's'),
        ('application_headers', 'F'),
        ('delivery_mode', 'o'),
        ('priority', 'o'),
        ('correlation_id', 's'),
        ('reply_to', 's'),
        ('expiration', 's'),
        ('message_id', 's'),
        ('timestamp', 'L'),
        ('type', 's'),
        ('user_id', 's'),
        ('app_id', 's'),
        ('cluster_id', 's')
    ]

    def __init__(self, body='', children=None, channel=None, **properties):
        super().__init__(**properties)
        #: set by basic_consume/basic_get
        self.delivery_info = None
        self.body = body
        self.channel = channel

    __slots__ = (
        "delivery_info",
        "body",
        "channel",
    )

    @property
    def headers(self):
        return self.properties.get('application_headers')

    @property
    def delivery_tag(self):
        return self.delivery_info.get('delivery_tag')

File diff suppressed because it is too large

@@ -1,784 +0,0 @@
"""AMQP Connections."""
# Copyright (C) 2007-2008 Barry Pederson <bp@barryp.org>
import logging
import socket
import uuid
import warnings
from array import array
from time import monotonic
from vine import ensure_promise
from . import __version__, sasl, spec
from .abstract_channel import AbstractChannel
from .channel import Channel
from .exceptions import (AMQPDeprecationWarning, ChannelError, ConnectionError,
ConnectionForced, MessageNacked, RecoverableChannelError,
RecoverableConnectionError, ResourceError,
error_for_code)
from .method_framing import frame_handler, frame_writer
from .transport import Transport
try:
    from ssl import SSLError
except ImportError:  # pragma: no cover
    class SSLError(Exception):  # noqa
        pass
W_FORCE_CONNECT = """\
The .{attr} attribute on the connection was accessed before
the connection was established. This is supported for now, but will
be deprecated in amqp 2.2.0.
Since amqp 2.0 you have to explicitly call Connection.connect()
before using the connection.
"""
START_DEBUG_FMT = """
Start from server, version: %d.%d, properties: %s, mechanisms: %s, locales: %s
""".strip()
__all__ = ('Connection',)
AMQP_LOGGER = logging.getLogger('amqp')
AMQP_HEARTBEAT_LOGGER = logging.getLogger(
'amqp.connection.Connection.heartbeat_tick'
)
#: Default map for :attr:`Connection.library_properties`
LIBRARY_PROPERTIES = {
'product': 'py-amqp',
'product_version': __version__,
}
#: Default map for :attr:`Connection.negotiate_capabilities`
NEGOTIATE_CAPABILITIES = {
'consumer_cancel_notify': True,
'connection.blocked': True,
'authentication_failure_close': True,
}
class Connection(AbstractChannel):
    """AMQP Connection.

    The connection class provides methods for a client to establish a
    network connection to a server, and for both peers to operate the
    connection thereafter.

    GRAMMAR::

        connection          = open-connection *use-connection close-connection
        open-connection     = C:protocol-header
                              S:START C:START-OK
                              *challenge
                              S:TUNE C:TUNE-OK
                              C:OPEN S:OPEN-OK
        challenge           = S:SECURE C:SECURE-OK
        use-connection      = *channel
        close-connection    = C:CLOSE S:CLOSE-OK
                            / S:CLOSE C:CLOSE-OK

    Create a connection to the specified host, which should be
    a 'host[:port]', such as 'localhost', or '1.2.3.4:5672'
    (defaults to 'localhost', if a port is not specified then
    5672 is used)

    Authentication can be controlled by passing one or more
    `amqp.sasl.SASL` instances as the `authentication` parameter, or
    setting the `login_method` string to one of the supported methods:
    'GSSAPI', 'EXTERNAL', 'AMQPLAIN', or 'PLAIN'.
    Otherwise authentication will be performed using any supported method
    preferred by the server. Userid and passwords apply to AMQPLAIN and
    PLAIN authentication, whereas on GSSAPI only userid will be used as the
    client name. For EXTERNAL authentication both userid and password are
    ignored.

    The 'ssl' parameter may be simply True/False, or
    a dictionary of options to pass to :class:`ssl.SSLContext` such as
    requiring certain certificates. For details, refer ``ssl`` parameter of
    :class:`~amqp.transport.SSLTransport`.

    The "socket_settings" parameter is a dictionary defining tcp
    settings which will be applied as socket options.

    When "confirm_publish" is set to True, the channel is put to
    confirm mode. In this mode, each published message is
    confirmed using Publisher confirms RabbitMQ extension.
    """

    Channel = Channel

    #: Mapping of protocol extensions to enable.
    #: The server will report these in server_properties[capabilities],
    #: and if a key in this map is present the client will tell the
    #: server to either enable or disable the capability depending
    #: on the value set in this map.
    #: For example with:
    #:     negotiate_capabilities = {
    #:         'consumer_cancel_notify': True,
    #:     }
    #: The client will enable this capability if the server reports
    #: support for it, but if the value is False the client will
    #: disable the capability.
    negotiate_capabilities = NEGOTIATE_CAPABILITIES

    #: These are sent to the server to announce what features
    #: we support, type of client etc.
    library_properties = LIBRARY_PROPERTIES

    #: Final heartbeat interval value (in float seconds) after negotiation
    heartbeat = None

    #: Original heartbeat interval value proposed by client.
    client_heartbeat = None

    #: Original heartbeat interval proposed by server.
    server_heartbeat = None

    #: Time of last heartbeat sent (in monotonic time, if available).
    last_heartbeat_sent = 0

    #: Time of last heartbeat received (in monotonic time, if available).
    last_heartbeat_received = 0

    #: Number of successful writes to socket.
    bytes_sent = 0

    #: Number of successful reads from socket.
    bytes_recv = 0

    #: Number of bytes sent to socket at the last heartbeat check.
    prev_sent = None

    #: Number of bytes received from socket at the last heartbeat check.
    prev_recv = None

    _METHODS = {
        spec.method(spec.Connection.Start, 'ooFSS'),
        spec.method(spec.Connection.OpenOk),
        spec.method(spec.Connection.Secure, 's'),
        spec.method(spec.Connection.Tune, 'BlB'),
        spec.method(spec.Connection.Close, 'BsBB'),
        spec.method(spec.Connection.Blocked),
        spec.method(spec.Connection.Unblocked),
        spec.method(spec.Connection.CloseOk),
    }
    _METHODS = {m.method_sig: m for m in _METHODS}

    _ALLOWED_METHODS_WHEN_CLOSING = (
        spec.Connection.Close, spec.Connection.CloseOk
    )

    connection_errors = (
        ConnectionError,
        socket.error,
        IOError,
        OSError,
    )

    channel_errors = (ChannelError,)

    recoverable_connection_errors = (
        RecoverableConnectionError,
        MessageNacked,
        socket.error,
        IOError,
        OSError,
    )

    recoverable_channel_errors = (
        RecoverableChannelError,
    )

    def __init__(self, host='localhost:5672', userid='guest', password='guest',
                 login_method=None, login_response=None,
                 authentication=(),
                 virtual_host='/', locale='en_US', client_properties=None,
                 ssl=False, connect_timeout=None, channel_max=None,
                 frame_max=None, heartbeat=0, on_open=None, on_blocked=None,
                 on_unblocked=None, confirm_publish=False,
                 on_tune_ok=None, read_timeout=None, write_timeout=None,
                 socket_settings=None, frame_handler=frame_handler,
                 frame_writer=frame_writer, **kwargs):
        self._connection_id = uuid.uuid4().hex
        channel_max = channel_max or 65535
        frame_max = frame_max or 131072

        if authentication:
            if isinstance(authentication, sasl.SASL):
                authentication = (authentication,)
            self.authentication = authentication
        elif login_method is not None:
            if login_method == 'GSSAPI':
                auth = sasl.GSSAPI(userid)
            elif login_method == 'EXTERNAL':
                auth = sasl.EXTERNAL()
            elif login_method == 'AMQPLAIN':
                if userid is None or password is None:
                    raise ValueError(
                        "Must supply authentication or userid/password")
                auth = sasl.AMQPLAIN(userid, password)
            elif login_method == 'PLAIN':
                if userid is None or password is None:
                    raise ValueError(
                        "Must supply authentication or userid/password")
                auth = sasl.PLAIN(userid, password)
            elif login_response is not None:
                auth = sasl.RAW(login_method, login_response)
            else:
                raise ValueError("Invalid login method", login_method)
            self.authentication = (auth,)
        else:
            self.authentication = (sasl.GSSAPI(userid, fail_soft=True),
                                   sasl.EXTERNAL(),
                                   sasl.AMQPLAIN(userid, password),
                                   sasl.PLAIN(userid, password))

        self.client_properties = dict(
            self.library_properties, **client_properties or {}
        )
        self.locale = locale
        self.host = host
        self.virtual_host = virtual_host
        self.on_tune_ok = ensure_promise(on_tune_ok)

        self.frame_handler_cls = frame_handler
        self.frame_writer_cls = frame_writer

        self._handshake_complete = False

        self.channels = {}
        # The connection object itself is treated as channel 0
        super().__init__(self, 0)

        self._frame_writer = None
        self._on_inbound_frame = None
        self._transport = None

        # Properties set in the Tune method
        self.channel_max = channel_max
        self.frame_max = frame_max
        self.client_heartbeat = heartbeat

        self.confirm_publish = confirm_publish
        self.ssl = ssl
        self.read_timeout = read_timeout
        self.write_timeout = write_timeout
        self.socket_settings = socket_settings

        # Callbacks
        self.on_blocked = on_blocked
        self.on_unblocked = on_unblocked
        self.on_open = ensure_promise(on_open)

        self._used_channel_ids = array('H')

        # Properties set in the Start method
        self.version_major = 0
        self.version_minor = 0
        self.server_properties = {}
        self.mechanisms = []
        self.locales = []

        self.connect_timeout = connect_timeout

    def __repr__(self):
        if self._transport:
            return f'<AMQP Connection: {self.host}/{self.virtual_host} ' \
                f'using {self._transport} at {id(self):#x}>'
        else:
            return f'<AMQP Connection: {self.host}/{self.virtual_host} ' \
                f'(disconnected) at {id(self):#x}>'

    def __enter__(self):
        self.connect()
        return self

    def __exit__(self, *eargs):
        self.close()

    def then(self, on_success, on_error=None):
        return self.on_open.then(on_success, on_error)

    def _setup_listeners(self):
        self._callbacks.update({
            spec.Connection.Start: self._on_start,
            spec.Connection.OpenOk: self._on_open_ok,
            spec.Connection.Secure: self._on_secure,
            spec.Connection.Tune: self._on_tune,
            spec.Connection.Close: self._on_close,
            spec.Connection.Blocked: self._on_blocked,
            spec.Connection.Unblocked: self._on_unblocked,
            spec.Connection.CloseOk: self._on_close_ok,
        })

    def connect(self, callback=None):
        # Let the transport.py module setup the actual
        # socket connection to the broker.
        #
        if self.connected:
            return callback() if callback else None
        try:
            self.transport = self.Transport(
                self.host, self.connect_timeout, self.ssl,
                self.read_timeout, self.write_timeout,
                socket_settings=self.socket_settings,
            )
            self.transport.connect()
            self.on_inbound_frame = self.frame_handler_cls(
                self, self.on_inbound_method)
            self.frame_writer = self.frame_writer_cls(self, self.transport)
            while not self._handshake_complete:
                self.drain_events(timeout=self.connect_timeout)
        except (OSError, SSLError):
            self.collect()
            raise

    def _warn_force_connect(self, attr):
        warnings.warn(AMQPDeprecationWarning(
            W_FORCE_CONNECT.format(attr=attr)))

    @property
    def transport(self):
        if self._transport is None:
            self._warn_force_connect('transport')
            self.connect()
        return self._transport

    @transport.setter
    def transport(self, transport):
        self._transport = transport

    @property
    def on_inbound_frame(self):
        if self._on_inbound_frame is None:
            self._warn_force_connect('on_inbound_frame')
            self.connect()
        return self._on_inbound_frame
@on_inbound_frame.setter
def on_inbound_frame(self, on_inbound_frame):
self._on_inbound_frame = on_inbound_frame
@property
def frame_writer(self):
if self._frame_writer is None:
self._warn_force_connect('frame_writer')
self.connect()
return self._frame_writer
@frame_writer.setter
def frame_writer(self, frame_writer):
self._frame_writer = frame_writer
def _on_start(self, version_major, version_minor, server_properties,
mechanisms, locales, argsig='FsSs'):
client_properties = self.client_properties
self.version_major = version_major
self.version_minor = version_minor
self.server_properties = server_properties
if isinstance(mechanisms, str):
mechanisms = mechanisms.encode('utf-8')
self.mechanisms = mechanisms.split(b' ')
self.locales = locales.split(' ')
AMQP_LOGGER.debug(
START_DEBUG_FMT,
self.version_major, self.version_minor,
self.server_properties, self.mechanisms, self.locales,
)
# Negotiate protocol extensions (capabilities)
scap = server_properties.get('capabilities') or {}
cap = client_properties.setdefault('capabilities', {})
cap.update({
wanted_cap: enable_cap
for wanted_cap, enable_cap in self.negotiate_capabilities.items()
if scap.get(wanted_cap)
})
if not cap:
# no capabilities, server may not react well to having
# this key present in client_properties, so we remove it.
client_properties.pop('capabilities', None)
for authentication in self.authentication:
if authentication.mechanism in self.mechanisms:
login_response = authentication.start(self)
if login_response is not NotImplemented:
break
else:
raise ConnectionError(
"Couldn't find appropriate auth mechanism "
"(can offer: {}; available: {})".format(
b", ".join(m.mechanism
for m in self.authentication
if m.mechanism).decode(),
b", ".join(self.mechanisms).decode()))
self.send_method(
spec.Connection.StartOk, argsig,
(client_properties, authentication.mechanism,
login_response, self.locale),
)
def _on_secure(self, challenge):
pass
def _on_tune(self, channel_max, frame_max, server_heartbeat, argsig='BlB'):
client_heartbeat = self.client_heartbeat or 0
self.channel_max = channel_max or self.channel_max
self.frame_max = frame_max or self.frame_max
self.server_heartbeat = server_heartbeat or 0
# negotiate the heartbeat interval to the smaller of the
# specified values
if self.server_heartbeat == 0 or client_heartbeat == 0:
self.heartbeat = max(self.server_heartbeat, client_heartbeat)
else:
self.heartbeat = min(self.server_heartbeat, client_heartbeat)
# Ignore server heartbeat if client_heartbeat is disabled
if not self.client_heartbeat:
self.heartbeat = 0
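# Worked example (sketch): server proposes 60s and the client 120s, so
# the negotiated value is min(60, 120) = 60.  If the server proposes 0
# (disabled), the client's value wins via max(); and if the client side
# disabled heartbeats entirely, the result above is forced back to 0.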
self.send_method(
spec.Connection.TuneOk, argsig,
(self.channel_max, self.frame_max, self.heartbeat),
callback=self._on_tune_sent,
)
def _on_tune_sent(self, argsig='ssb'):
self.send_method(
spec.Connection.Open, argsig, (self.virtual_host, '', False),
)
def _on_open_ok(self):
self._handshake_complete = True
self.on_open(self)
def Transport(self, host, connect_timeout,
ssl=False, read_timeout=None, write_timeout=None,
socket_settings=None, **kwargs):
return Transport(
host, connect_timeout=connect_timeout, ssl=ssl,
read_timeout=read_timeout, write_timeout=write_timeout,
socket_settings=socket_settings, **kwargs)
@property
def connected(self):
return self._transport and self._transport.connected
def collect(self):
if self._transport:
self._transport.close()
if self.channels:
# Copy all the channels except self since the channels
# dictionary changes during the collection process.
channels = [
ch for ch in self.channels.values()
if ch is not self
]
for ch in channels:
ch.collect()
self._transport = self.connection = self.channels = None
def _get_free_channel_id(self):
# Cast to a set for fast lookups, and keep stored as an array for lower memory usage.
used_channel_ids = set(self._used_channel_ids)
for channel_id in range(1, self.channel_max + 1):
if channel_id not in used_channel_ids:
self._used_channel_ids.append(channel_id)
return channel_id
raise ResourceError(
'No free channel ids, current={}, channel_max={}'.format(
len(self.channels), self.channel_max), spec.Channel.Open)
def _claim_channel_id(self, channel_id):
if channel_id in self._used_channel_ids:
raise ConnectionError(f'Channel {channel_id!r} already open')
else:
self._used_channel_ids.append(channel_id)
return channel_id
def channel(self, channel_id=None, callback=None):
"""Create new channel.
Fetch a Channel object identified by the numeric channel_id, or
create that object if it doesn't already exist.
"""
if self.channels is None:
raise RecoverableConnectionError('Connection already closed.')
try:
return self.channels[channel_id]
except KeyError:
channel = self.Channel(self, channel_id, on_open=callback)
channel.open()
return channel
def is_alive(self):
raise NotImplementedError('Use AMQP heartbeats')
def drain_events(self, timeout=None):
# read until message is ready
while not self.blocking_read(timeout):
pass
def blocking_read(self, timeout=None):
with self.transport.having_timeout(timeout):
frame = self.transport.read_frame()
return self.on_inbound_frame(frame)
def on_inbound_method(self, channel_id, method_sig, payload, content):
if self.channels is None:
raise RecoverableConnectionError('Connection already closed')
return self.channels[channel_id].dispatch_method(
method_sig, payload, content,
)
def close(self, reply_code=0, reply_text='', method_sig=(0, 0),
argsig='BsBB'):
"""Request a connection close.
This method indicates that the sender wants to close the
connection. This may be due to internal conditions (e.g. a
forced shut-down) or due to an error handling a specific
method, i.e. an exception. When a close is due to an
exception, the sender provides the class and method id of the
method which caused the exception.
RULE:
After sending this method any received method except the
Close-OK method MUST be discarded.
RULE:
The peer sending this method MAY use a counter or timeout
to detect failure of the other peer to respond correctly
with the Close-OK method.
RULE:
When a server receives the Close method from a client it
MUST delete all server-side resources associated with the
client's context. A client CANNOT reconnect to a context
after sending or receiving a Close method.
PARAMETERS:
reply_code: short
The reply code. The AMQ reply codes are defined in AMQ
RFC 011.
reply_text: shortstr
The localised reply text. This text can be logged as an
aid to resolving issues.
class_id: short
failing method class
When the close is provoked by a method exception, this
is the class of the method.
method_id: short
failing method ID
When the close is provoked by a method exception, this
is the ID of the method.
"""
if self._transport is None:
# already closed
return
try:
self.is_closing = True
return self.send_method(
spec.Connection.Close, argsig,
(reply_code, reply_text, method_sig[0], method_sig[1]),
wait=spec.Connection.CloseOk,
)
except (OSError, SSLError):
# close connection
self.collect()
raise
finally:
self.is_closing = False
def _on_close(self, reply_code, reply_text, class_id, method_id):
"""Request a connection close.
This method indicates that the sender wants to close the
connection. This may be due to internal conditions (e.g. a
forced shut-down) or due to an error handling a specific
method, i.e. an exception. When a close is due to an
exception, the sender provides the class and method id of the
method which caused the exception.
RULE:
After sending this method any received method except the
Close-OK method MUST be discarded.
RULE:
The peer sending this method MAY use a counter or timeout
to detect failure of the other peer to respond correctly
with the Close-OK method.
RULE:
When a server receives the Close method from a client it
MUST delete all server-side resources associated with the
client's context. A client CANNOT reconnect to a context
after sending or receiving a Close method.
PARAMETERS:
reply_code: short
The reply code. The AMQ reply codes are defined in AMQ
RFC 011.
reply_text: shortstr
The localised reply text. This text can be logged as an
aid to resolving issues.
class_id: short
failing method class
When the close is provoked by a method exception, this
is the class of the method.
method_id: short
failing method ID
When the close is provoked by a method exception, this
is the ID of the method.
"""
self._x_close_ok()
raise error_for_code(reply_code, reply_text,
(class_id, method_id), ConnectionError)
def _x_close_ok(self):
"""Confirm a connection close.
This method confirms a Connection.Close method and tells the
recipient that it is safe to release resources for the
connection and close the socket.
RULE:
A peer that detects a socket closure without having
received a Close-Ok handshake method SHOULD log the error.
"""
self.send_method(spec.Connection.CloseOk, callback=self._on_close_ok)
def _on_close_ok(self):
"""Confirm a connection close.
This method confirms a Connection.Close method and tells the
recipient that it is safe to release resources for the
connection and close the socket.
RULE:
A peer that detects a socket closure without having
received a Close-Ok handshake method SHOULD log the error.
"""
self.collect()
def _on_blocked(self):
"""Callback called when connection blocked.
Notes:
This is an RabbitMQ Extension.
"""
reason = 'connection blocked, see broker logs'
if self.on_blocked:
return self.on_blocked(reason)
def _on_unblocked(self):
if self.on_unblocked:
return self.on_unblocked()
def send_heartbeat(self):
self.frame_writer(8, 0, None, None, None)
def heartbeat_tick(self, rate=2):
"""Send heartbeat packets if necessary.
Raises:
~amqp.exceptions.ConnectionForced: if no heartbeats have been
received recently.
Note:
This should be called frequently, on the order of
once per second.
Keyword Arguments:
rate (int): Number of heartbeat frames to send during the heartbeat
timeout
"""
AMQP_HEARTBEAT_LOGGER.debug('heartbeat_tick : for connection %s',
self._connection_id)
if not self.heartbeat:
return
# If rate is wrong, let's use 2 as default
if rate <= 0:
rate = 2
# treat actual data exchange in either direction as a heartbeat
sent_now = self.bytes_sent
recv_now = self.bytes_recv
if self.prev_sent is None or self.prev_sent != sent_now:
self.last_heartbeat_sent = monotonic()
if self.prev_recv is None or self.prev_recv != recv_now:
self.last_heartbeat_received = monotonic()
now = monotonic()
AMQP_HEARTBEAT_LOGGER.debug(
'heartbeat_tick : Prev sent/recv: %s/%s, '
'now - %s/%s, monotonic - %s, '
'last_heartbeat_sent - %s, heartbeat int. - %s '
'for connection %s',
self.prev_sent, self.prev_recv,
sent_now, recv_now, now,
self.last_heartbeat_sent,
self.heartbeat,
self._connection_id,
)
self.prev_sent, self.prev_recv = sent_now, recv_now
# send a heartbeat if it's time to do so
if now > self.last_heartbeat_sent + self.heartbeat / rate:
AMQP_HEARTBEAT_LOGGER.debug(
'heartbeat_tick: sending heartbeat for connection %s',
self._connection_id)
self.send_heartbeat()
self.last_heartbeat_sent = monotonic()
# if we've missed two intervals' heartbeats, fail; this gives the
# server enough time to send heartbeats a little late
two_heartbeats = 2 * self.heartbeat
two_heartbeats_interval = self.last_heartbeat_received + two_heartbeats
heartbeats_missed = two_heartbeats_interval < monotonic()
if self.last_heartbeat_received and heartbeats_missed:
raise ConnectionForced('Too many heartbeats missed')
@property
def sock(self):
return self.transport.sock
@property
def server_capabilities(self):
return self.server_properties.get('capabilities') or {}
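
A minimal usage sketch of the Connection class above (host, credentials and
routing key are placeholders; assumes a reachable broker):

    import amqp

    # __enter__ calls connect(), __exit__ calls close()
    with amqp.Connection(host='localhost:5672', userid='guest',
                         password='guest', confirm_publish=True) as conn:
        ch = conn.channel()                   # allocates a free channel id
        ch.basic_publish(amqp.Message(body=b'hello'),
                         exchange='', routing_key='test')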
@@ -1,288 +0,0 @@
"""Exceptions used by amqp."""
# Copyright (C) 2007-2008 Barry Pederson <bp@barryp.org>
from struct import pack, unpack
__all__ = (
'AMQPError',
'ConnectionError', 'ChannelError',
'RecoverableConnectionError', 'IrrecoverableConnectionError',
'RecoverableChannelError', 'IrrecoverableChannelError',
'ConsumerCancelled', 'ContentTooLarge', 'NoConsumers',
'ConnectionForced', 'InvalidPath', 'AccessRefused', 'NotFound',
'ResourceLocked', 'PreconditionFailed', 'FrameError', 'FrameSyntaxError',
'InvalidCommand', 'ChannelNotOpen', 'UnexpectedFrame', 'ResourceError',
'NotAllowed', 'AMQPNotImplementedError', 'InternalError',
'MessageNacked',
'AMQPDeprecationWarning',
)
class AMQPDeprecationWarning(UserWarning):
"""Warning for deprecated things."""
class MessageNacked(Exception):
"""Message was nacked by broker."""
class AMQPError(Exception):
"""Base class for all AMQP exceptions."""
code = 0
def __init__(self, reply_text=None, method_sig=None,
method_name=None, reply_code=None):
self.message = reply_text
self.reply_code = reply_code or self.code
self.reply_text = reply_text
self.method_sig = method_sig
self.method_name = method_name or ''
if method_sig and not self.method_name:
self.method_name = METHOD_NAME_MAP.get(method_sig, '')
Exception.__init__(self, reply_code,
reply_text, method_sig, self.method_name)
def __str__(self):
if self.method:
return '{0.method}: ({0.reply_code}) {0.reply_text}'.format(self)
return self.reply_text or '<{}: unknown error>'.format(
type(self).__name__
)
@property
def method(self):
return self.method_name or self.method_sig
class ConnectionError(AMQPError):
"""AMQP Connection Error."""
class ChannelError(AMQPError):
"""AMQP Channel Error."""
class RecoverableChannelError(ChannelError):
"""Exception class for recoverable channel errors."""
class IrrecoverableChannelError(ChannelError):
"""Exception class for irrecoverable channel errors."""
class RecoverableConnectionError(ConnectionError):
"""Exception class for recoverable connection errors."""
class IrrecoverableConnectionError(ConnectionError):
"""Exception class for irrecoverable connection errors."""
class Blocked(RecoverableConnectionError):
"""AMQP Connection Blocked Predicate."""
class ConsumerCancelled(RecoverableConnectionError):
"""AMQP Consumer Cancelled Predicate."""
class ContentTooLarge(RecoverableChannelError):
"""AMQP Content Too Large Error."""
code = 311
class NoConsumers(RecoverableChannelError):
"""AMQP No Consumers Error."""
code = 313
class ConnectionForced(RecoverableConnectionError):
"""AMQP Connection Forced Error."""
code = 320
class InvalidPath(IrrecoverableConnectionError):
"""AMQP Invalid Path Error."""
code = 402
class AccessRefused(IrrecoverableChannelError):
"""AMQP Access Refused Error."""
code = 403
class NotFound(IrrecoverableChannelError):
"""AMQP Not Found Error."""
code = 404
class ResourceLocked(RecoverableChannelError):
"""AMQP Resource Locked Error."""
code = 405
class PreconditionFailed(IrrecoverableChannelError):
"""AMQP Precondition Failed Error."""
code = 406
class FrameError(IrrecoverableConnectionError):
"""AMQP Frame Error."""
code = 501
class FrameSyntaxError(IrrecoverableConnectionError):
"""AMQP Frame Syntax Error."""
code = 502
class InvalidCommand(IrrecoverableConnectionError):
"""AMQP Invalid Command Error."""
code = 503
class ChannelNotOpen(IrrecoverableConnectionError):
"""AMQP Channel Not Open Error."""
code = 504
class UnexpectedFrame(IrrecoverableConnectionError):
"""AMQP Unexpected Frame."""
code = 505
class ResourceError(RecoverableConnectionError):
"""AMQP Resource Error."""
code = 506
class NotAllowed(IrrecoverableConnectionError):
"""AMQP Not Allowed Error."""
code = 530
class AMQPNotImplementedError(IrrecoverableConnectionError):
"""AMQP Not Implemented Error."""
code = 540
class InternalError(IrrecoverableConnectionError):
"""AMQP Internal Error."""
code = 541
ERROR_MAP = {
311: ContentTooLarge,
313: NoConsumers,
320: ConnectionForced,
402: InvalidPath,
403: AccessRefused,
404: NotFound,
405: ResourceLocked,
406: PreconditionFailed,
501: FrameError,
502: FrameSyntaxError,
503: InvalidCommand,
504: ChannelNotOpen,
505: UnexpectedFrame,
506: ResourceError,
530: NotAllowed,
540: AMQPNotImplementedError,
541: InternalError,
}
def error_for_code(code, text, method, default):
try:
return ERROR_MAP[code](text, method, reply_code=code)
except KeyError:
return default(text, method, reply_code=code)
METHOD_NAME_MAP = {
(10, 10): 'Connection.start',
(10, 11): 'Connection.start_ok',
(10, 20): 'Connection.secure',
(10, 21): 'Connection.secure_ok',
(10, 30): 'Connection.tune',
(10, 31): 'Connection.tune_ok',
(10, 40): 'Connection.open',
(10, 41): 'Connection.open_ok',
(10, 50): 'Connection.close',
(10, 51): 'Connection.close_ok',
(20, 10): 'Channel.open',
(20, 11): 'Channel.open_ok',
(20, 20): 'Channel.flow',
(20, 21): 'Channel.flow_ok',
(20, 40): 'Channel.close',
(20, 41): 'Channel.close_ok',
(30, 10): 'Access.request',
(30, 11): 'Access.request_ok',
(40, 10): 'Exchange.declare',
(40, 11): 'Exchange.declare_ok',
(40, 20): 'Exchange.delete',
(40, 21): 'Exchange.delete_ok',
(40, 30): 'Exchange.bind',
(40, 31): 'Exchange.bind_ok',
(40, 40): 'Exchange.unbind',
(40, 41): 'Exchange.unbind_ok',
(50, 10): 'Queue.declare',
(50, 11): 'Queue.declare_ok',
(50, 20): 'Queue.bind',
(50, 21): 'Queue.bind_ok',
(50, 30): 'Queue.purge',
(50, 31): 'Queue.purge_ok',
(50, 40): 'Queue.delete',
(50, 41): 'Queue.delete_ok',
(50, 50): 'Queue.unbind',
(50, 51): 'Queue.unbind_ok',
(60, 10): 'Basic.qos',
(60, 11): 'Basic.qos_ok',
(60, 20): 'Basic.consume',
(60, 21): 'Basic.consume_ok',
(60, 30): 'Basic.cancel',
(60, 31): 'Basic.cancel_ok',
(60, 40): 'Basic.publish',
(60, 50): 'Basic.return',
(60, 60): 'Basic.deliver',
(60, 70): 'Basic.get',
(60, 71): 'Basic.get_ok',
(60, 72): 'Basic.get_empty',
(60, 80): 'Basic.ack',
(60, 90): 'Basic.reject',
(60, 100): 'Basic.recover_async',
(60, 110): 'Basic.recover',
(60, 111): 'Basic.recover_ok',
(60, 120): 'Basic.nack',
(90, 10): 'Tx.select',
(90, 11): 'Tx.select_ok',
(90, 20): 'Tx.commit',
(90, 21): 'Tx.commit_ok',
(90, 30): 'Tx.rollback',
(90, 31): 'Tx.rollback_ok',
(85, 10): 'Confirm.select',
(85, 11): 'Confirm.select_ok',
}
for _method_id, _method_name in list(METHOD_NAME_MAP.items()):
METHOD_NAME_MAP[unpack('>I', pack('>HH', *_method_id))[0]] = \
_method_name
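
A short sketch of how the table above is used to turn a broker reply code
into a typed exception (the reply text here is illustrative):

    exc = error_for_code(404, 'no such queue', (50, 10), ChannelError)
    assert isinstance(exc, NotFound)          # 404 -> NotFound
    assert exc.method == 'Queue.declare'      # resolved via METHOD_NAME_MAP

    # unknown codes fall back to the supplied default class
    exc = error_for_code(999, 'mystery', None, ConnectionError)
    assert type(exc) is ConnectionError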
@@ -1,189 +0,0 @@
"""Convert between frames and higher-level AMQP methods."""
# Copyright (C) 2007-2008 Barry Pederson <bp@barryp.org>
from collections import defaultdict
from struct import pack, pack_into, unpack_from
from . import spec
from .basic_message import Message
from .exceptions import UnexpectedFrame
from .utils import str_to_bytes
__all__ = ('frame_handler', 'frame_writer')
#: Set of methods that require both a content frame and a body frame.
_CONTENT_METHODS = frozenset([
spec.Basic.Return,
spec.Basic.Deliver,
spec.Basic.GetOk,
])
#: Number of bytes reserved for protocol in a content frame.
#: We use this to calculate when a frame exceeds the max frame size;
#: if it does not, the message will fit into the preallocated buffer.
FRAME_OVERHEAD = 40
def frame_handler(connection, callback,
unpack_from=unpack_from, content_methods=_CONTENT_METHODS):
"""Create closure that reads frames."""
expected_types = defaultdict(lambda: 1)
partial_messages = {}
def on_frame(frame):
frame_type, channel, buf = frame
connection.bytes_recv += 1
if frame_type not in (expected_types[channel], 8):
raise UnexpectedFrame(
'Received frame {} while expecting type: {}'.format(
frame_type, expected_types[channel]),
)
elif frame_type == 1:
method_sig = unpack_from('>HH', buf, 0)
if method_sig in content_methods:
# Save what we've got so far and wait for the content-header
partial_messages[channel] = Message(
frame_method=method_sig, frame_args=buf,
)
expected_types[channel] = 2
return False
callback(channel, method_sig, buf, None)
elif frame_type == 2:
msg = partial_messages[channel]
msg.inbound_header(buf)
if not msg.ready:
# wait for the content-body
expected_types[channel] = 3
return False
# bodyless message, we're done
expected_types[channel] = 1
partial_messages.pop(channel, None)
callback(channel, msg.frame_method, msg.frame_args, msg)
elif frame_type == 3:
msg = partial_messages[channel]
msg.inbound_body(buf)
if not msg.ready:
# wait for the rest of the content-body
return False
expected_types[channel] = 1
partial_messages.pop(channel, None)
callback(channel, msg.frame_method, msg.frame_args, msg)
elif frame_type == 8:
# bytes_recv already updated
return False
return True
return on_frame
class Buffer:
def __init__(self, buf):
self.buf = buf
@property
def buf(self):
return self._buf
@buf.setter
def buf(self, buf):
self._buf = buf
# Using a memoryview allows slicing without copying underlying data.
# Slicing this is much faster than slicing the bytearray directly.
# More details: https://stackoverflow.com/a/34257357
self.view = memoryview(buf)
def frame_writer(connection, transport,
pack=pack, pack_into=pack_into, range=range, len=len,
bytes=bytes, str_to_bytes=str_to_bytes, text_t=str):
"""Create closure that writes frames."""
write = transport.write
buffer_store = Buffer(bytearray(connection.frame_max - 8))
def write_frame(type_, channel, method_sig, args, content):
chunk_size = connection.frame_max - 8
offset = 0
properties = None
args = str_to_bytes(args)
if content:
body = content.body
if isinstance(body, str):
encoding = content.properties.setdefault(
'content_encoding', 'utf-8')
body = body.encode(encoding)
properties = content._serialize_properties()
bodylen = len(body)
properties_len = len(properties) or 0
framelen = len(args) + properties_len + bodylen + FRAME_OVERHEAD
bigbody = framelen > chunk_size
else:
body, bodylen, bigbody = None, 0, 0
if bigbody:
# ## SLOW: string copy and write for every frame
frame = (b''.join([pack('>HH', *method_sig), args])
if type_ == 1 else b'') # encode method frame
framelen = len(frame)
write(pack('>BHI%dsB' % framelen,
type_, channel, framelen, frame, 0xce))
if body:
frame = b''.join([
pack('>HHQ', method_sig[0], 0, len(body)),
properties,
])
framelen = len(frame)
write(pack('>BHI%dsB' % framelen,
2, channel, framelen, frame, 0xce))
for i in range(0, bodylen, chunk_size):
frame = body[i:i + chunk_size]
framelen = len(frame)
write(pack('>BHI%dsB' % framelen,
3, channel, framelen,
frame, 0xce))
else:
# frame_max can be updated via connection._on_tune. If
# it became larger, then we need to resize the buffer
# to prevent overflow.
if chunk_size > len(buffer_store.buf):
buffer_store.buf = bytearray(chunk_size)
buf = buffer_store.buf
# ## FAST: pack into buffer and single write
frame = (b''.join([pack('>HH', *method_sig), args])
if type_ == 1 else b'')
framelen = len(frame)
pack_into('>BHI%dsB' % framelen, buf, offset,
type_, channel, framelen, frame, 0xce)
offset += 8 + framelen
if body is not None:
frame = b''.join([
pack('>HHQ', method_sig[0], 0, len(body)),
properties,
])
framelen = len(frame)
pack_into('>BHI%dsB' % framelen, buf, offset,
2, channel, framelen, frame, 0xce)
offset += 8 + framelen
bodylen = len(body)
if bodylen > 0:
framelen = bodylen
pack_into('>BHI%dsB' % framelen, buf, offset,
3, channel, framelen, body, 0xce)
offset += 8 + framelen
write(buffer_store.view[:offset])
connection.bytes_sent += 1
return write_frame
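
A quick sketch of the chunking arithmetic in write_frame above; the 8 bytes
subtracted from frame_max cover the 7-byte frame header plus the 1-byte
frame-end octet (the body size is illustrative):

    frame_max = 131072                      # negotiated in Connection.Tune
    chunk_size = frame_max - 8              # payload bytes per body frame
    bodylen = 500000                        # hypothetical message body
    n_frames = -(-bodylen // chunk_size)    # ceiling division -> 4 frames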
@@ -1,79 +0,0 @@
"""Platform compatibility."""
import platform
import re
import sys
import typing
# Jython does not have this attribute
try:
from socket import SOL_TCP
except ImportError: # pragma: no cover
from socket import IPPROTO_TCP as SOL_TCP # noqa
RE_NUM = re.compile(r'(\d+).+')
def _linux_version_to_tuple(s: str) -> typing.Tuple[int, int, int]:
return tuple(map(_versionatom, s.split('.')[:3]))
def _versionatom(s: str) -> int:
if s.isdigit():
return int(s)
match = RE_NUM.match(s)
return int(match.groups()[0]) if match else 0
# available socket options for TCP level
KNOWN_TCP_OPTS = {
'TCP_CORK', 'TCP_DEFER_ACCEPT', 'TCP_KEEPCNT',
'TCP_KEEPIDLE', 'TCP_KEEPINTVL', 'TCP_LINGER2',
'TCP_MAXSEG', 'TCP_NODELAY', 'TCP_QUICKACK',
'TCP_SYNCNT', 'TCP_USER_TIMEOUT', 'TCP_WINDOW_CLAMP',
}
LINUX_VERSION = None
if sys.platform.startswith('linux'):
LINUX_VERSION = _linux_version_to_tuple(platform.release())
if LINUX_VERSION < (2, 6, 37):
KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT')
# Windows Subsystem for Linux is an edge-case: the Python socket library
# returns most TCP_* enums, but they aren't actually supported
if platform.release().endswith("Microsoft"):
KNOWN_TCP_OPTS = {'TCP_NODELAY', 'TCP_KEEPIDLE', 'TCP_KEEPINTVL',
'TCP_KEEPCNT'}
elif sys.platform.startswith('darwin'):
KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT')
elif 'bsd' in sys.platform:
KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT')
# According to MSDN, Windows platforms support getsockopt(TCP_MAXSEG) but not
# setsockopt(TCP_MAXSEG) on IPPROTO_TCP sockets.
elif sys.platform.startswith('win'):
KNOWN_TCP_OPTS = {'TCP_NODELAY'}
elif sys.platform.startswith('cygwin'):
KNOWN_TCP_OPTS = {'TCP_NODELAY'}
# illumos does not allow setting the TCP_MAXSEG socket option,
# even if the Oracle documentation says otherwise.
# TCP_USER_TIMEOUT does not exist on Solaris 11.4.
elif sys.platform.startswith('sunos'):
KNOWN_TCP_OPTS.remove('TCP_MAXSEG')
KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT')
# AIX does not allow setting the TCP_MAXSEG
# or the TCP_USER_TIMEOUT socket options.
elif sys.platform.startswith('aix'):
KNOWN_TCP_OPTS.remove('TCP_MAXSEG')
KNOWN_TCP_OPTS.remove('TCP_USER_TIMEOUT')
__all__ = (
'LINUX_VERSION',
'SOL_TCP',
'KNOWN_TCP_OPTS',
)
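
A small sketch of how KNOWN_TCP_OPTS is typically consumed (mirroring
_get_tcp_socket_defaults in transport.py later in this commit): option names
are resolved to socket-module constants before being applied via setsockopt.

    import socket

    resolved = {name: getattr(socket, name)
                for name in KNOWN_TCP_OPTS if hasattr(socket, name)}
    print(sorted(resolved))   # e.g. ['TCP_KEEPCNT', 'TCP_KEEPIDLE', ...]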
@@ -1,12 +0,0 @@
"""Protocol data."""
from collections import namedtuple
queue_declare_ok_t = namedtuple(
'queue_declare_ok_t', ('queue', 'message_count', 'consumer_count'),
)
basic_return_t = namedtuple(
'basic_return_t',
('reply_code', 'reply_text', 'exchange', 'routing_key', 'message'),
)
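
A quick sketch of how these tuples behave (the values are illustrative;
Channel.queue_declare() is the typical producer of queue_declare_ok_t):

    ok = queue_declare_ok_t(queue='tasks', message_count=3, consumer_count=1)
    name, messages, consumers = ok            # unpacks positionally
    assert ok.queue == 'tasks' and ok.consumer_count == 1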
@@ -1,191 +0,0 @@
"""SASL mechanisms for AMQP authentication."""
import socket
import warnings
from io import BytesIO
from amqp.serialization import _write_table
class SASL:
"""The base class for all amqp SASL authentication mechanisms.
You should sub-class this if you're implementing your own authentication.
"""
@property
def mechanism(self):
"""Return a bytes containing the SASL mechanism name."""
raise NotImplementedError
def start(self, connection):
"""Return the first response to a SASL challenge as a bytes object."""
raise NotImplementedError
class PLAIN(SASL):
"""PLAIN SASL authentication mechanism.
See https://tools.ietf.org/html/rfc4616 for details
"""
mechanism = b'PLAIN'
def __init__(self, username, password):
self.username, self.password = username, password
__slots__ = (
"username",
"password",
)
def start(self, connection):
if self.username is None or self.password is None:
return NotImplemented
login_response = BytesIO()
login_response.write(b'\0')
login_response.write(self.username.encode('utf-8'))
login_response.write(b'\0')
login_response.write(self.password.encode('utf-8'))
return login_response.getvalue()
class AMQPLAIN(SASL):
"""AMQPLAIN SASL authentication mechanism.
This is a non-standard mechanism used by AMQP servers.
"""
mechanism = b'AMQPLAIN'
def __init__(self, username, password):
self.username, self.password = username, password
__slots__ = (
"username",
"password",
)
def start(self, connection):
if self.username is None or self.password is None:
return NotImplemented
login_response = BytesIO()
_write_table({b'LOGIN': self.username, b'PASSWORD': self.password},
login_response.write, [])
# Skip the length at the beginning
return login_response.getvalue()[4:]
def _get_gssapi_mechanism():
try:
import gssapi
import gssapi.raw.misc # Fail if the old python-gssapi is installed
except ImportError:
class FakeGSSAPI(SASL):
"""A no-op SASL mechanism for when gssapi isn't available."""
mechanism = None
def __init__(self, client_name=None, service=b'amqp',
rdns=False, fail_soft=False):
if not fail_soft:
raise NotImplementedError(
"You need to install the `gssapi` module for GSSAPI "
"SASL support")
def start(self): # pragma: no cover
return NotImplemented
return FakeGSSAPI
else:
class GSSAPI(SASL):
"""GSSAPI SASL authentication mechanism.
See https://tools.ietf.org/html/rfc4752 for details
"""
mechanism = b'GSSAPI'
def __init__(self, client_name=None, service=b'amqp',
rdns=False, fail_soft=False):
if client_name and not isinstance(client_name, bytes):
client_name = client_name.encode('ascii')
self.client_name = client_name
self.fail_soft = fail_soft
self.service = service
self.rdns = rdns
__slots__ = (
"client_name",
"fail_soft",
"service",
"rdns"
)
def get_hostname(self, connection):
sock = connection.transport.sock
if self.rdns and sock.family in (socket.AF_INET,
socket.AF_INET6):
peer = sock.getpeername()
hostname, _, _ = socket.gethostbyaddr(peer[0])
else:
hostname = connection.transport.host
if not isinstance(hostname, bytes):
hostname = hostname.encode('ascii')
return hostname
def start(self, connection):
try:
if self.client_name:
creds = gssapi.Credentials(
name=gssapi.Name(self.client_name))
else:
creds = None
hostname = self.get_hostname(connection)
name = gssapi.Name(b'@'.join([self.service, hostname]),
gssapi.NameType.hostbased_service)
context = gssapi.SecurityContext(name=name, creds=creds)
return context.step(None)
except gssapi.raw.misc.GSSError:
if self.fail_soft:
return NotImplemented
else:
raise
return GSSAPI
GSSAPI = _get_gssapi_mechanism()
class EXTERNAL(SASL):
"""EXTERNAL SASL mechanism.
Enables external authentication, i.e. not handled through this protocol.
Only passes 'EXTERNAL' as authentication mechanism, but no further
authentication data.
"""
mechanism = b'EXTERNAL'
def start(self, connection):
return b''
class RAW(SASL):
"""A generic custom SASL mechanism.
This mechanism takes a mechanism name and response to send to the server,
so can be used for simple custom authentication schemes.
"""
mechanism = None
def __init__(self, mechanism, response):
assert isinstance(mechanism, bytes)
assert isinstance(response, bytes)
self.mechanism, self.response = mechanism, response
warnings.warn("Passing login_method and login_response to Connection "
"is deprecated. Please implement a SASL subclass "
"instead.", DeprecationWarning)
def start(self, connection):
return self.response
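
A sketch of selecting one of these mechanisms explicitly; by default the
Connection tries GSSAPI, EXTERNAL, AMQPLAIN and PLAIN in that order (host
and credentials are placeholders):

    import amqp
    from amqp import sasl

    conn = amqp.Connection(host='localhost:5672',
                           authentication=(sasl.PLAIN('guest', 'guest'),))
    conn.connect()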
@@ -1,582 +0,0 @@
"""Convert between bytestreams and higher-level AMQP types.
2007-11-05 Barry Pederson <bp@barryp.org>
"""
# Copyright (C) 2007 Barry Pederson <bp@barryp.org>
import calendar
from datetime import datetime
from decimal import Decimal
from io import BytesIO
from struct import pack, unpack_from
from .exceptions import FrameSyntaxError
from .spec import Basic
from .utils import bytes_to_str as pstr_t
from .utils import str_to_bytes
ILLEGAL_TABLE_TYPE = """\
Table type {0!r} not handled by amqp.
"""
ILLEGAL_TABLE_TYPE_WITH_KEY = """\
Table type {0!r} for key {1!r} not handled by amqp. [value: {2!r}]
"""
ILLEGAL_TABLE_TYPE_WITH_VALUE = """\
Table type {0!r} not handled by amqp. [value: {1!r}]
"""
def _read_item(buf, offset):
ftype = chr(buf[offset])
offset += 1
# 'S': long string
if ftype == 'S':
slen, = unpack_from('>I', buf, offset)
offset += 4
try:
val = pstr_t(buf[offset:offset + slen])
except UnicodeDecodeError:
val = buf[offset:offset + slen]
offset += slen
# 's': short string
elif ftype == 's':
slen, = unpack_from('>B', buf, offset)
offset += 1
val = pstr_t(buf[offset:offset + slen])
offset += slen
# 'x': Bytes Array
elif ftype == 'x':
blen, = unpack_from('>I', buf, offset)
offset += 4
val = buf[offset:offset + blen]
offset += blen
# 'b': short-short int
elif ftype == 'b':
val, = unpack_from('>B', buf, offset)
offset += 1
# 'B': short-short unsigned int
elif ftype == 'B':
val, = unpack_from('>b', buf, offset)
offset += 1
# 'U': short int
elif ftype == 'U':
val, = unpack_from('>h', buf, offset)
offset += 2
# 'u': short unsigned int
elif ftype == 'u':
val, = unpack_from('>H', buf, offset)
offset += 2
# 'I': long int
elif ftype == 'I':
val, = unpack_from('>i', buf, offset)
offset += 4
# 'i': long unsigned int
elif ftype == 'i':
val, = unpack_from('>I', buf, offset)
offset += 4
# 'L': long long int
elif ftype == 'L':
val, = unpack_from('>q', buf, offset)
offset += 8
# 'l': long long unsigned int
elif ftype == 'l':
val, = unpack_from('>Q', buf, offset)
offset += 8
# 'f': float
elif ftype == 'f':
val, = unpack_from('>f', buf, offset)
offset += 4
# 'd': double
elif ftype == 'd':
val, = unpack_from('>d', buf, offset)
offset += 8
# 'D': decimal
elif ftype == 'D':
d, = unpack_from('>B', buf, offset)
offset += 1
n, = unpack_from('>i', buf, offset)
offset += 4
val = Decimal(n) / Decimal(10 ** d)
# 'F': table
elif ftype == 'F':
tlen, = unpack_from('>I', buf, offset)
offset += 4
limit = offset + tlen
val = {}
while offset < limit:
keylen, = unpack_from('>B', buf, offset)
offset += 1
key = pstr_t(buf[offset:offset + keylen])
offset += keylen
val[key], offset = _read_item(buf, offset)
# 'A': array
elif ftype == 'A':
alen, = unpack_from('>I', buf, offset)
offset += 4
limit = offset + alen
val = []
while offset < limit:
v, offset = _read_item(buf, offset)
val.append(v)
# 't': bool
elif ftype == 't':
val, = unpack_from('>B', buf, offset)
val = bool(val)
offset += 1
# 'T': timestamp
elif ftype == 'T':
val, = unpack_from('>Q', buf, offset)
offset += 8
val = datetime.utcfromtimestamp(val)
# 'V': void
elif ftype == 'V':
val = None
else:
raise FrameSyntaxError(
'Unknown value in table: {!r} ({!r})'.format(
ftype, type(ftype)))
return val, offset
def loads(format, buf, offset):
"""Deserialize amqp format.
bit = b
octet = o
short = B
long = l
long long = L
float = f
shortstr = s
longstr = S
byte array = x
table = F
array = A
timestamp = T
"""
bitcount = bits = 0
values = []
append = values.append
format = pstr_t(format)
for p in format:
if p == 'b':
if not bitcount:
bits = ord(buf[offset:offset + 1])
offset += 1
bitcount = 8
val = (bits & 1) == 1
bits >>= 1
bitcount -= 1
elif p == 'o':
bitcount = bits = 0
val, = unpack_from('>B', buf, offset)
offset += 1
elif p == 'B':
bitcount = bits = 0
val, = unpack_from('>H', buf, offset)
offset += 2
elif p == 'l':
bitcount = bits = 0
val, = unpack_from('>I', buf, offset)
offset += 4
elif p == 'L':
bitcount = bits = 0
val, = unpack_from('>Q', buf, offset)
offset += 8
elif p == 'f':
bitcount = bits = 0
val, = unpack_from('>f', buf, offset)
offset += 4
elif p == 's':
bitcount = bits = 0
slen, = unpack_from('B', buf, offset)
offset += 1
val = buf[offset:offset + slen].decode('utf-8', 'surrogatepass')
offset += slen
elif p == 'S':
bitcount = bits = 0
slen, = unpack_from('>I', buf, offset)
offset += 4
val = buf[offset:offset + slen].decode('utf-8', 'surrogatepass')
offset += slen
elif p == 'x':
blen, = unpack_from('>I', buf, offset)
offset += 4
val = buf[offset:offset + blen]
offset += blen
elif p == 'F':
bitcount = bits = 0
tlen, = unpack_from('>I', buf, offset)
offset += 4
limit = offset + tlen
val = {}
while offset < limit:
keylen, = unpack_from('>B', buf, offset)
offset += 1
key = pstr_t(buf[offset:offset + keylen])
offset += keylen
val[key], offset = _read_item(buf, offset)
elif p == 'A':
bitcount = bits = 0
alen, = unpack_from('>I', buf, offset)
offset += 4
limit = offset + alen
val = []
while offset < limit:
aval, offset = _read_item(buf, offset)
val.append(aval)
elif p == 'T':
bitcount = bits = 0
val, = unpack_from('>Q', buf, offset)
offset += 8
val = datetime.utcfromtimestamp(val)
else:
raise FrameSyntaxError(ILLEGAL_TABLE_TYPE.format(p))
append(val)
return values, offset
def _flushbits(bits, write):
if bits:
write(pack('B' * len(bits), *bits))
bits[:] = []
return 0
def dumps(format, values):
"""Serialize AMQP arguments.
Notes:
bit = b
octet = o
short = B
long = l
long long = L
shortstr = s
longstr = S
byte array = x
table = F
array = A
timestamp = T
"""
bitcount = 0
bits = []
out = BytesIO()
write = out.write
format = pstr_t(format)
for i, val in enumerate(values):
p = format[i]
if p == 'b':
val = 1 if val else 0
shift = bitcount % 8
if shift == 0:
bits.append(0)
bits[-1] |= (val << shift)
bitcount += 1
elif p == 'o':
bitcount = _flushbits(bits, write)
write(pack('B', val))
elif p == 'B':
bitcount = _flushbits(bits, write)
write(pack('>H', int(val)))
elif p == 'l':
bitcount = _flushbits(bits, write)
write(pack('>I', val))
elif p == 'L':
bitcount = _flushbits(bits, write)
write(pack('>Q', val))
elif p == 'f':
bitcount = _flushbits(bits, write)
write(pack('>f', val))
elif p == 's':
val = val or ''
bitcount = _flushbits(bits, write)
if isinstance(val, str):
val = val.encode('utf-8', 'surrogatepass')
write(pack('B', len(val)))
write(val)
elif p == 'S' or p == 'x':
val = val or ''
bitcount = _flushbits(bits, write)
if isinstance(val, str):
val = val.encode('utf-8', 'surrogatepass')
write(pack('>I', len(val)))
write(val)
elif p == 'F':
bitcount = _flushbits(bits, write)
_write_table(val or {}, write, bits)
elif p == 'A':
bitcount = _flushbits(bits, write)
_write_array(val or [], write, bits)
elif p == 'T':
write(pack('>Q', int(calendar.timegm(val.utctimetuple()))))
_flushbits(bits, write)
return out.getvalue()
def _write_table(d, write, bits):
out = BytesIO()
twrite = out.write
for k, v in d.items():
if isinstance(k, str):
k = k.encode('utf-8', 'surrogatepass')
twrite(pack('B', len(k)))
twrite(k)
try:
_write_item(v, twrite, bits)
except ValueError:
raise FrameSyntaxError(
ILLEGAL_TABLE_TYPE_WITH_KEY.format(type(v), k, v))
table_data = out.getvalue()
write(pack('>I', len(table_data)))
write(table_data)
def _write_array(list_, write, bits):
out = BytesIO()
awrite = out.write
for v in list_:
try:
_write_item(v, awrite, bits)
except ValueError:
raise FrameSyntaxError(
ILLEGAL_TABLE_TYPE_WITH_VALUE.format(type(v), v))
array_data = out.getvalue()
write(pack('>I', len(array_data)))
write(array_data)
def _write_item(v, write, bits):
if isinstance(v, (str, bytes)):
if isinstance(v, str):
v = v.encode('utf-8', 'surrogatepass')
write(pack('>cI', b'S', len(v)))
write(v)
elif isinstance(v, bool):
write(pack('>cB', b't', int(v)))
elif isinstance(v, float):
write(pack('>cd', b'd', v))
elif isinstance(v, int):
if v > 2147483647 or v < -2147483647:
write(pack('>cq', b'L', v))
else:
write(pack('>ci', b'I', v))
elif isinstance(v, Decimal):
sign, digits, exponent = v.as_tuple()
v = 0
for d in digits:
v = (v * 10) + d
if sign:
v = -v
write(pack('>cBi', b'D', -exponent, v))
elif isinstance(v, datetime):
write(
pack('>cQ', b'T', int(calendar.timegm(v.utctimetuple()))))
elif isinstance(v, dict):
write(b'F')
_write_table(v, write, bits)
elif isinstance(v, (list, tuple)):
write(b'A')
_write_array(v, write, bits)
elif v is None:
write(b'V')
else:
raise ValueError()
def decode_properties_basic(buf, offset):
"""Decode basic properties."""
properties = {}
flags, = unpack_from('>H', buf, offset)
offset += 2
if flags & 0x8000:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['content_type'] = pstr_t(buf[offset:offset + slen])
offset += slen
if flags & 0x4000:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['content_encoding'] = pstr_t(buf[offset:offset + slen])
offset += slen
if flags & 0x2000:
_f, offset = loads('F', buf, offset)
properties['application_headers'], = _f
if flags & 0x1000:
properties['delivery_mode'], = unpack_from('>B', buf, offset)
offset += 1
if flags & 0x0800:
properties['priority'], = unpack_from('>B', buf, offset)
offset += 1
if flags & 0x0400:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['correlation_id'] = pstr_t(buf[offset:offset + slen])
offset += slen
if flags & 0x0200:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['reply_to'] = pstr_t(buf[offset:offset + slen])
offset += slen
if flags & 0x0100:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['expiration'] = pstr_t(buf[offset:offset + slen])
offset += slen
if flags & 0x0080:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['message_id'] = pstr_t(buf[offset:offset + slen])
offset += slen
if flags & 0x0040:
properties['timestamp'], = unpack_from('>Q', buf, offset)
offset += 8
if flags & 0x0020:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['type'] = pstr_t(buf[offset:offset + slen])
offset += slen
if flags & 0x0010:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['user_id'] = pstr_t(buf[offset:offset + slen])
offset += slen
if flags & 0x0008:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['app_id'] = pstr_t(buf[offset:offset + slen])
offset += slen
if flags & 0x0004:
slen, = unpack_from('>B', buf, offset)
offset += 1
properties['cluster_id'] = pstr_t(buf[offset:offset + slen])
offset += slen
return properties, offset
PROPERTY_CLASSES = {
Basic.CLASS_ID: decode_properties_basic,
}
class GenericContent:
"""Abstract base class for AMQP content.
Subclasses should override the PROPERTIES attribute.
"""
CLASS_ID = None
PROPERTIES = [('dummy', 's')]
def __init__(self, frame_method=None, frame_args=None, **props):
self.frame_method = frame_method
self.frame_args = frame_args
self.properties = props
self._pending_chunks = []
self.body_received = 0
self.body_size = 0
self.ready = False
__slots__ = (
"frame_method",
"frame_args",
"properties",
"_pending_chunks",
"body_received",
"body_size",
"ready",
# adding '__dict__' to get dynamic assignment
"__dict__",
"__weakref__",
)
def __getattr__(self, name):
# Look for additional properties in the 'properties'
# dictionary, and if present - the 'delivery_info' dictionary.
if name == '__setstate__':
# Allows pickling/unpickling to work
raise AttributeError('__setstate__')
if name in self.properties:
return self.properties[name]
raise AttributeError(name)
def _load_properties(self, class_id, buf, offset):
"""Load AMQP properties.
Given the raw bytes containing the property-flags and property-list
from a content-frame-header, parse and insert into a dictionary
stored in this object as an attribute named 'properties'.
"""
# Read 16-bit shorts until we get one with a low bit set to zero
props, offset = PROPERTY_CLASSES[class_id](buf, offset)
self.properties = props
return offset
def _serialize_properties(self):
"""Serialize AMQP properties.
Serialize the 'properties' attribute (a dictionary) into
the raw bytes making up a set of property flags and a
property list, suitable for putting into a content frame header.
"""
shift = 15
flag_bits = 0
flags = []
sformat, svalues = [], []
props = self.properties
for key, proptype in self.PROPERTIES:
val = props.get(key, None)
if val is not None:
if shift == 0:
flags.append(flag_bits)
flag_bits = 0
shift = 15
flag_bits |= (1 << shift)
if proptype != 'bit':
sformat.append(str_to_bytes(proptype))
svalues.append(val)
shift -= 1
flags.append(flag_bits)
result = BytesIO()
write = result.write
for flag_bits in flags:
write(pack('>H', flag_bits))
write(dumps(b''.join(sformat), svalues))
return result.getvalue()
def inbound_header(self, buf, offset=0):
class_id, self.body_size = unpack_from('>HxxQ', buf, offset)
offset += 12
self._load_properties(class_id, buf, offset)
if not self.body_size:
self.ready = True
return offset
def inbound_body(self, buf):
chunks = self._pending_chunks
self.body_received += len(buf)
if self.body_received >= self.body_size:
if chunks:
chunks.append(buf)
self.body = bytes().join(chunks)
chunks[:] = []
else:
self.body = buf
self.ready = True
else:
chunks.append(buf)
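
A round-trip sketch using dumps()/loads() with the format codes documented
above ('B' = short, 's' = shortstr, 'F' = table):

    raw = dumps(b'BsF', [7, 'hello', {'retries': 3}])
    values, offset = loads(b'BsF', raw, 0)
    assert values == [7, 'hello', {'retries': 3}]
    assert offset == len(raw)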
@@ -1,121 +0,0 @@
"""AMQP Spec."""
from collections import namedtuple
method_t = namedtuple('method_t', ('method_sig', 'args', 'content'))
def method(method_sig, args=None, content=False):
"""Create amqp method specification tuple."""
return method_t(method_sig, args, content)
class Connection:
"""AMQ Connection class."""
CLASS_ID = 10
Start = (10, 10)
StartOk = (10, 11)
Secure = (10, 20)
SecureOk = (10, 21)
Tune = (10, 30)
TuneOk = (10, 31)
Open = (10, 40)
OpenOk = (10, 41)
Close = (10, 50)
CloseOk = (10, 51)
Blocked = (10, 60)
Unblocked = (10, 61)
class Channel:
"""AMQ Channel class."""
CLASS_ID = 20
Open = (20, 10)
OpenOk = (20, 11)
Flow = (20, 20)
FlowOk = (20, 21)
Close = (20, 40)
CloseOk = (20, 41)
class Exchange:
"""AMQ Exchange class."""
CLASS_ID = 40
Declare = (40, 10)
DeclareOk = (40, 11)
Delete = (40, 20)
DeleteOk = (40, 21)
Bind = (40, 30)
BindOk = (40, 31)
Unbind = (40, 40)
UnbindOk = (40, 51)
class Queue:
"""AMQ Queue class."""
CLASS_ID = 50
Declare = (50, 10)
DeclareOk = (50, 11)
Bind = (50, 20)
BindOk = (50, 21)
Purge = (50, 30)
PurgeOk = (50, 31)
Delete = (50, 40)
DeleteOk = (50, 41)
Unbind = (50, 50)
UnbindOk = (50, 51)
class Basic:
"""AMQ Basic class."""
CLASS_ID = 60
Qos = (60, 10)
QosOk = (60, 11)
Consume = (60, 20)
ConsumeOk = (60, 21)
Cancel = (60, 30)
CancelOk = (60, 31)
Publish = (60, 40)
Return = (60, 50)
Deliver = (60, 60)
Get = (60, 70)
GetOk = (60, 71)
GetEmpty = (60, 72)
Ack = (60, 80)
Nack = (60, 120)
Reject = (60, 90)
RecoverAsync = (60, 100)
Recover = (60, 110)
RecoverOk = (60, 111)
class Confirm:
"""AMQ Confirm class."""
CLASS_ID = 85
Select = (85, 10)
SelectOk = (85, 11)
class Tx:
"""AMQ Tx class."""
CLASS_ID = 90
Select = (90, 10)
SelectOk = (90, 11)
Commit = (90, 20)
CommitOk = (90, 21)
Rollback = (90, 30)
RollbackOk = (90, 31)
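
A tiny sketch tying this module together: every class attribute is a
(class_id, method_id) pair, and method() attaches an argument signature,
just as the _METHODS tables earlier in this commit do:

    assert Basic.Publish == (60, 40)
    m = method(Basic.Deliver, 'sLbss', content=True)
    assert m.method_sig == (60, 60) and m.content is True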
@@ -1,679 +0,0 @@
"""Transport implementation."""
# Copyright (C) 2009 Barry Pederson <bp@barryp.org>
import errno
import os
import re
import socket
import ssl
from contextlib import contextmanager
from ssl import SSLError
from struct import pack, unpack
from .exceptions import UnexpectedFrame
from .platform import KNOWN_TCP_OPTS, SOL_TCP
from .utils import set_cloexec
_UNAVAIL = {errno.EAGAIN, errno.EINTR, errno.ENOENT, errno.EWOULDBLOCK}
AMQP_PORT = 5672
EMPTY_BUFFER = bytes()
SIGNED_INT_MAX = 0x7FFFFFFF
# Yes, Advanced Message Queuing Protocol Protocol is redundant
AMQP_PROTOCOL_HEADER = b'AMQP\x00\x00\x09\x01'
# Match things like: [fe80::1]:5432, from RFC 2732
IPV6_LITERAL = re.compile(r'\[([\.0-9a-f:]+)\](?::(\d+))?')
DEFAULT_SOCKET_SETTINGS = {
'TCP_NODELAY': 1,
'TCP_USER_TIMEOUT': 1000,
'TCP_KEEPIDLE': 60,
'TCP_KEEPINTVL': 10,
'TCP_KEEPCNT': 9,
}
def to_host_port(host, default=AMQP_PORT):
"""Convert hostname:port string to host, port tuple."""
port = default
m = IPV6_LITERAL.match(host)
if m:
host = m.group(1)
if m.group(2):
port = int(m.group(2))
else:
if ':' in host:
host, port = host.rsplit(':', 1)
port = int(port)
return host, port
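# Examples (sketch): to_host_port('[fe80::1]:5671') -> ('fe80::1', 5671);
# to_host_port('broker.local') -> ('broker.local', 5672) via the default.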
class _AbstractTransport:
"""Common superclass for TCP and SSL transports.
PARAMETERS:
host: str
Broker address in format ``HOSTNAME:PORT``.
connect_timeout: int
Timeout for creating a new connection.
read_timeout: int
sets ``SO_RCVTIMEO`` parameter of socket.
write_timeout: int
sets ``SO_SNDTIMEO`` parameter of socket.
socket_settings: dict
dictionary of ``optname``/``optval`` pairs passed to
``setsockopt(2)``.
raise_on_initial_eintr: bool
when True, ``socket.timeout`` is raised
when an exception is received during the first read. See ``_read()``
for details.
"""
def __init__(self, host, connect_timeout=None,
read_timeout=None, write_timeout=None,
socket_settings=None, raise_on_initial_eintr=True, **kwargs):
self.connected = False
self.sock = None
self.raise_on_initial_eintr = raise_on_initial_eintr
self._read_buffer = EMPTY_BUFFER
self.host, self.port = to_host_port(host)
self.connect_timeout = connect_timeout
self.read_timeout = read_timeout
self.write_timeout = write_timeout
self.socket_settings = socket_settings
__slots__ = (
"connection",
"sock",
"raise_on_initial_eintr",
"_read_buffer",
"host",
"port",
"connect_timeout",
"read_timeout",
"write_timeout",
"socket_settings",
# adding '__dict__' to get dynamic assignment
"__dict__",
"__weakref__",
)
def __repr__(self):
if self.sock:
src = f'{self.sock.getsockname()[0]}:{self.sock.getsockname()[1]}'
try:
dst = f'{self.sock.getpeername()[0]}:{self.sock.getpeername()[1]}'
except (socket.error) as e:
dst = f'ERROR: {e}'
return f'<{type(self).__name__}: {src} -> {dst} at {id(self):#x}>'
else:
return f'<{type(self).__name__}: (disconnected) at {id(self):#x}>'
def connect(self):
try:
# are we already connected?
if self.connected:
return
self._connect(self.host, self.port, self.connect_timeout)
self._init_socket(
self.socket_settings, self.read_timeout, self.write_timeout,
)
# we've sent the banner; signal connect
# EINTR, EAGAIN, EWOULDBLOCK would signal that the banner
# has _not_ been sent
self.connected = True
except (OSError, SSLError):
# if not fully connected, close socket, and reraise error
if self.sock and not self.connected:
self.sock.close()
self.sock = None
raise
@contextmanager
def having_timeout(self, timeout):
if timeout is None:
yield self.sock
else:
sock = self.sock
prev = sock.gettimeout()
if prev != timeout:
sock.settimeout(timeout)
try:
yield self.sock
except SSLError as exc:
if 'timed out' in str(exc):
# http://bugs.python.org/issue10272
raise socket.timeout()
elif 'The operation did not complete' in str(exc):
# Non-blocking SSL sockets can throw SSLError
raise socket.timeout()
raise
except OSError as exc:
if exc.errno == errno.EWOULDBLOCK:
raise socket.timeout()
raise
finally:
if timeout != prev:
sock.settimeout(prev)
def _connect(self, host, port, timeout):
entries = socket.getaddrinfo(
host, port, socket.AF_UNSPEC, socket.SOCK_STREAM, SOL_TCP,
)
for i, res in enumerate(entries):
af, socktype, proto, canonname, sa = res
try:
self.sock = socket.socket(af, socktype, proto)
try:
set_cloexec(self.sock, True)
except NotImplementedError:
pass
self.sock.settimeout(timeout)
self.sock.connect(sa)
except socket.error:
if self.sock:
self.sock.close()
self.sock = None
if i + 1 >= len(entries):
raise
else:
break
def _init_socket(self, socket_settings, read_timeout, write_timeout):
self.sock.settimeout(None) # set socket back to blocking mode
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
self._set_socket_options(socket_settings)
# set socket timeouts
for timeout, interval in ((socket.SO_SNDTIMEO, write_timeout),
(socket.SO_RCVTIMEO, read_timeout)):
if interval is not None:
sec = int(interval)
usec = int((interval - sec) * 1000000)
self.sock.setsockopt(
socket.SOL_SOCKET, timeout,
pack('ll', sec, usec),
)
self._setup_transport()
self._write(AMQP_PROTOCOL_HEADER)
def _get_tcp_socket_defaults(self, sock):
tcp_opts = {}
for opt in KNOWN_TCP_OPTS:
enum = None
if opt == 'TCP_USER_TIMEOUT':
try:
from socket import TCP_USER_TIMEOUT as enum
except ImportError:
# should be in Python 3.6+ on Linux.
enum = 18
elif hasattr(socket, opt):
enum = getattr(socket, opt)
if enum:
if opt in DEFAULT_SOCKET_SETTINGS:
tcp_opts[enum] = DEFAULT_SOCKET_SETTINGS[opt]
elif hasattr(socket, opt):
tcp_opts[enum] = sock.getsockopt(
SOL_TCP, getattr(socket, opt))
return tcp_opts
def _set_socket_options(self, socket_settings):
tcp_opts = self._get_tcp_socket_defaults(self.sock)
if socket_settings:
tcp_opts.update(socket_settings)
for opt, val in tcp_opts.items():
self.sock.setsockopt(SOL_TCP, opt, val)
def _read(self, n, initial=False):
"""Read exactly n bytes from the peer."""
raise NotImplementedError('Must be overridden in subclass')
def _setup_transport(self):
"""Do any additional initialization of the class."""
pass
def _shutdown_transport(self):
"""Do any preliminary work in shutting down the connection."""
pass
def _write(self, s):
"""Completely write a string to the peer."""
raise NotImplementedError('Must be overridden in subclass')
def close(self):
if self.sock is not None:
try:
self._shutdown_transport()
except OSError:
pass
# Call shutdown first to make sure that pending messages
# reach the AMQP broker if the program exits after
# calling this method.
try:
self.sock.shutdown(socket.SHUT_RDWR)
except OSError:
pass
try:
self.sock.close()
except OSError:
pass
self.sock = None
self.connected = False
def read_frame(self, unpack=unpack):
"""Parse AMQP frame.
Frame has the following format::
0      1         3         7                size+7      size+8
+------+---------+---------+  +-------------+  +-----------+
| type | channel |  size   |  |   payload   |  | frame-end |
+------+---------+---------+  +-------------+  +-----------+
 octet   short      long       'size' octets       octet
"""
read = self._read
read_frame_buffer = EMPTY_BUFFER
try:
frame_header = read(7, True)
read_frame_buffer += frame_header
frame_type, channel, size = unpack('>BHI', frame_header)
# >I is an unsigned int, but the argument to sock.recv is signed,
# so we know the size can be at most 2 * SIGNED_INT_MAX
if size > SIGNED_INT_MAX:
part1 = read(SIGNED_INT_MAX)
try:
part2 = read(size - SIGNED_INT_MAX)
except (socket.timeout, OSError, SSLError):
# In case this read times out, we need to make sure to not
# lose part1 when we retry the read
read_frame_buffer += part1
raise
payload = b''.join([part1, part2])
else:
payload = read(size)
read_frame_buffer += payload
frame_end = ord(read(1))
except socket.timeout:
self._read_buffer = read_frame_buffer + self._read_buffer
raise
except (OSError, SSLError) as exc:
if (
isinstance(exc, socket.error) and os.name == 'nt'
and exc.errno == errno.EWOULDBLOCK # noqa
):
# On windows we can get a read timeout with a winsock error
# code instead of a proper socket.timeout() error, see
# https://github.com/celery/py-amqp/issues/320
self._read_buffer = read_frame_buffer + self._read_buffer
raise socket.timeout()
if isinstance(exc, SSLError) and 'timed out' in str(exc):
# Don't disconnect for ssl read time outs
# http://bugs.python.org/issue10272
self._read_buffer = read_frame_buffer + self._read_buffer
raise socket.timeout()
if exc.errno not in _UNAVAIL:
self.connected = False
raise
# frame-end octet must contain '\xce' value
if frame_end == 206:
return frame_type, channel, payload
else:
raise UnexpectedFrame(
f'Received frame_end {frame_end:#04x} while expecting 0xce')
def write(self, s):
try:
self._write(s)
except socket.timeout:
raise
except OSError as exc:
if exc.errno not in _UNAVAIL:
self.connected = False
raise
class SSLTransport(_AbstractTransport):
"""Transport that works over SSL.
PARAMETERS:
host: str
Broker address in format ``HOSTNAME:PORT``.
connect_timeout: int
Timeout for creating a new connection.
ssl: bool|dict
parameters of TLS subsystem.
- when ``ssl`` is not dictionary, defaults of TLS are used
- otherwise:
- if ``ssl`` dictionary contains ``context`` key,
:attr:`~SSLTransport._wrap_context` is used for wrapping
socket. ``context`` is a dictionary passed to
:attr:`~SSLTransport._wrap_context` as context parameter.
All others items from ``ssl`` argument are passed as
``sslopts``.
- if ``ssl`` dictionary does not contain ``context`` key,
:attr:`~SSLTransport._wrap_socket_sni` is used for
wrapping socket. All items in ``ssl`` argument are
passed to :attr:`~SSLTransport._wrap_socket_sni` as
parameters.
kwargs:
additional arguments of
:class:`~amqp.transport._AbstractTransport` class
"""
def __init__(self, host, connect_timeout=None, ssl=None, **kwargs):
self.sslopts = ssl if isinstance(ssl, dict) else {}
self._read_buffer = EMPTY_BUFFER
super().__init__(
host, connect_timeout=connect_timeout, **kwargs)
__slots__ = (
"sslopts",
)
def _setup_transport(self):
"""Wrap the socket in an SSL object."""
self.sock = self._wrap_socket(self.sock, **self.sslopts)
# Explicitly set a timeout here to stop any hangs on handshake.
self.sock.settimeout(self.connect_timeout)
self.sock.do_handshake()
self._quick_recv = self.sock.read
def _wrap_socket(self, sock, context=None, **sslopts):
if context:
return self._wrap_context(sock, sslopts, **context)
return self._wrap_socket_sni(sock, **sslopts)
def _wrap_context(self, sock, sslopts, check_hostname=None, **ctx_options):
"""Wrap socket without SNI headers.
PARAMETERS:
sock: socket.socket
Socket to be wrapped.
sslopts: dict
Parameters of :attr:`ssl.SSLContext.wrap_socket`.
check_hostname
Whether to match the peer certificate's hostname. See
:attr:`ssl.SSLContext.check_hostname` for details.
ctx_options
Parameters of :attr:`ssl.create_default_context`.
"""
ctx = ssl.create_default_context(**ctx_options)
ctx.check_hostname = check_hostname
return ctx.wrap_socket(sock, **sslopts)
def _wrap_socket_sni(self, sock, keyfile=None, certfile=None,
server_side=False, cert_reqs=None,
ca_certs=None, do_handshake_on_connect=False,
suppress_ragged_eofs=True, server_hostname=None,
ciphers=None, ssl_version=None):
"""Socket wrap with SNI headers.
stdlib :attr:`ssl.SSLContext.wrap_socket` method augmented with support
for setting the server_hostname field required for SNI hostname header.
PARAMETERS:
sock: socket.socket
Socket to be wrapped.
keyfile: str
Path to the private key
certfile: str
Path to the certificate
server_side: bool
Identifies whether server-side or client-side
behavior is desired from this socket. See
:attr:`~ssl.SSLContext.wrap_socket` for details.
cert_reqs: ssl.VerifyMode
When set to a value other than :attr:`ssl.CERT_NONE`, the peer's certificate
is checked. Possible values are :attr:`ssl.CERT_NONE`,
:attr:`ssl.CERT_OPTIONAL` and :attr:`ssl.CERT_REQUIRED`.
ca_certs: str
Path to “certification authority” (CA) certificates
used to validate other peers' certificates when ``cert_reqs``
is other than :attr:`ssl.CERT_NONE`.
do_handshake_on_connect: bool
Specifies whether to do the SSL
handshake automatically. See
:attr:`~ssl.SSLContext.wrap_socket` for details.
suppress_ragged_eofs: bool
See :attr:`~ssl.SSLContext.wrap_socket` for details.
server_hostname: str
Specifies the hostname of the service which
we are connecting to. See :attr:`~ssl.SSLContext.wrap_socket`
for details.
ciphers: str
Available ciphers for sockets created with this
context. See :attr:`ssl.SSLContext.set_ciphers`
ssl_version:
Protocol of the SSL context. The value is one of the
``ssl.PROTOCOL_*`` constants.
"""
opts = {
'sock': sock,
'server_side': server_side,
'do_handshake_on_connect': do_handshake_on_connect,
'suppress_ragged_eofs': suppress_ragged_eofs,
'server_hostname': server_hostname,
}
if ssl_version is None:
ssl_version = (
ssl.PROTOCOL_TLS_SERVER
if server_side
else ssl.PROTOCOL_TLS_CLIENT
)
context = ssl.SSLContext(ssl_version)
if certfile is not None:
context.load_cert_chain(certfile, keyfile)
if ca_certs is not None:
context.load_verify_locations(ca_certs)
if ciphers is not None:
context.set_ciphers(ciphers)
# Set SNI headers if supported.
# Must set context.check_hostname before setting context.verify_mode
# to avoid setting context.verify_mode=ssl.CERT_NONE while
# context.check_hostname is still True (the default value in context
# if client-side) which results in the following exception:
# ValueError: Cannot set verify_mode to CERT_NONE when check_hostname
# is enabled.
try:
context.check_hostname = (
ssl.HAS_SNI and server_hostname is not None
)
except AttributeError:
pass # ask forgiveness not permission
# See note above re: ordering for context.check_hostname and
# context.verify_mode assignments.
if cert_reqs is not None:
context.verify_mode = cert_reqs
if ca_certs is None and context.verify_mode != ssl.CERT_NONE:
purpose = (
ssl.Purpose.CLIENT_AUTH
if server_side
else ssl.Purpose.SERVER_AUTH
)
context.load_default_certs(purpose)
sock = context.wrap_socket(**opts)
return sock
def _shutdown_transport(self):
"""Unwrap a SSL socket, so we can call shutdown()."""
if self.sock is not None:
self.sock = self.sock.unwrap()
def _read(self, n, initial=False,
_errnos=(errno.ENOENT, errno.EAGAIN, errno.EINTR)):
# According to SSL_read(3), it can at most return 16kb of data.
# Thus, we use an internal read buffer like TCPTransport._read
# to get the exact number of bytes wanted.
recv = self._quick_recv
rbuf = self._read_buffer
try:
while len(rbuf) < n:
try:
s = recv(n - len(rbuf)) # see note above
except OSError as exc:
# ssl.sock.read may cause ENOENT if the
# operation couldn't be performed (Issue celery#1414).
if exc.errno in _errnos:
if initial and self.raise_on_initial_eintr:
raise socket.timeout()
continue
raise
if not s:
raise OSError('Server unexpectedly closed connection')
rbuf += s
except: # noqa
self._read_buffer = rbuf
raise
result, self._read_buffer = rbuf[:n], rbuf[n:]
return result
def _write(self, s):
"""Write a string out to the SSL socket fully."""
write = self.sock.write
while s:
try:
n = write(s)
except ValueError:
# AG: sock._sslobj might become null in the meantime if the
# remote connection has hung up.
# In Python 3.4, a ValueError is raised if self._sslobj is
# None.
n = 0
if not n:
raise OSError('Socket closed')
s = s[n:]
class TCPTransport(_AbstractTransport):
"""Transport that deals directly with TCP socket.
All parameters are :class:`~amqp.transport._AbstractTransport` class.
"""
def _setup_transport(self):
# Setup to _write() directly to the socket, and
# do our own buffered reads.
self._write = self.sock.sendall
self._read_buffer = EMPTY_BUFFER
self._quick_recv = self.sock.recv
def _read(self, n, initial=False, _errnos=(errno.EAGAIN, errno.EINTR)):
"""Read exactly n bytes from the socket."""
recv = self._quick_recv
rbuf = self._read_buffer
try:
while len(rbuf) < n:
try:
s = recv(n - len(rbuf))
except OSError as exc:
if exc.errno in _errnos:
if initial and self.raise_on_initial_eintr:
raise socket.timeout()
continue
raise
if not s:
raise OSError('Server unexpectedly closed connection')
rbuf += s
except: # noqa
self._read_buffer = rbuf
raise
result, self._read_buffer = rbuf[:n], rbuf[n:]
return result
def Transport(host, connect_timeout=None, ssl=False, **kwargs):
"""Create transport.
Given a few parameters from the Connection constructor,
select and create a subclass of
:class:`~amqp.transport._AbstractTransport`.
PARAMETERS:
host: str
Broker address in format ``HOSTNAME:PORT``.
connect_timeout: int
Timeout for creating a new connection.
ssl: bool|dict
If set, :class:`~amqp.transport.SSLTransport` is used
and ``ssl`` parameter is passed to it. Otherwise
:class:`~amqp.transport.TCPTransport` is used.
kwargs:
additional arguments of :class:`~amqp.transport._AbstractTransport`
class
"""
transport = SSLTransport if ssl else TCPTransport
return transport(host, connect_timeout=connect_timeout, ssl=ssl, **kwargs)
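# Minimal selection sketch (illustrative, not part of the original module;
# constructing a transport does not open a network connection -- connect()
# does that separately):
if __name__ == '__main__':  # pragma: no cover
    t = Transport('localhost:5672', connect_timeout=5, ssl=False)
    assert isinstance(t, TCPTransport)
    t = Transport('localhost:5671', connect_timeout=5, ssl=True)
    assert isinstance(t, SSLTransport)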

View File

@@ -1,64 +0,0 @@
"""Compatibility utilities."""
import logging
from logging import NullHandler
# enables celery 3.1.23 to start again
from vine import promise # noqa
from vine.utils import wraps
try:
import fcntl
except ImportError: # pragma: no cover
fcntl = None # noqa
def set_cloexec(fd, cloexec):
"""Set flag to close fd after exec."""
if fcntl is None:
return
try:
FD_CLOEXEC = fcntl.FD_CLOEXEC
except AttributeError:
raise NotImplementedError(
'close-on-exec flag not supported on this platform',
)
flags = fcntl.fcntl(fd, fcntl.F_GETFD)
if cloexec:
flags |= FD_CLOEXEC
else:
flags &= ~FD_CLOEXEC
return fcntl.fcntl(fd, fcntl.F_SETFD, flags)
def coro(gen):
"""Decorator to mark generator as a co-routine."""
@wraps(gen)
def _boot(*args, **kwargs):
co = gen(*args, **kwargs)
next(co)
return co
return _boot
def str_to_bytes(s):
"""Convert str to bytes."""
if isinstance(s, str):
return s.encode('utf-8', 'surrogatepass')
return s
def bytes_to_str(s):
"""Convert bytes to str."""
if isinstance(s, bytes):
return s.decode('utf-8', 'surrogatepass')
return s
def get_logger(logger):
"""Get logger by name."""
if isinstance(logger, str):
logger = logging.getLogger(logger)
if not logger.handlers:
logger.addHandler(NullHandler())
return logger

View File

@@ -1,247 +0,0 @@
Metadata-Version: 2.4
Name: asgiref
Version: 3.10.0
Summary: ASGI specs, helper code, and adapters
Home-page: https://github.com/django/asgiref/
Author: Django Software Foundation
Author-email: foundation@djangoproject.com
License: BSD-3-Clause
Project-URL: Documentation, https://asgi.readthedocs.io/
Project-URL: Further Documentation, https://docs.djangoproject.com/en/stable/topics/async/#async-adapter-functions
Project-URL: Changelog, https://github.com/django/asgiref/blob/master/CHANGELOG.txt
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Web Environment
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Internet :: WWW/HTTP
Requires-Python: >=3.9
License-File: LICENSE
Requires-Dist: typing_extensions>=4; python_version < "3.11"
Provides-Extra: tests
Requires-Dist: pytest; extra == "tests"
Requires-Dist: pytest-asyncio; extra == "tests"
Requires-Dist: mypy>=1.14.0; extra == "tests"
Dynamic: license-file
asgiref
=======
.. image:: https://github.com/django/asgiref/actions/workflows/tests.yml/badge.svg
:target: https://github.com/django/asgiref/actions/workflows/tests.yml
.. image:: https://img.shields.io/pypi/v/asgiref.svg
:target: https://pypi.python.org/pypi/asgiref
ASGI is a standard for Python asynchronous web apps and servers to communicate
with each other, and positioned as an asynchronous successor to WSGI. You can
read more at https://asgi.readthedocs.io/en/latest/
This package includes ASGI base libraries, such as:
* Sync-to-async and async-to-sync function wrappers, ``asgiref.sync``
* Server base classes, ``asgiref.server``
* A WSGI-to-ASGI adapter, in ``asgiref.wsgi``
Function wrappers
-----------------
These allow you to wrap or decorate async or sync functions to call them from
the other style (so you can call async functions from a synchronous thread,
or vice-versa).
In particular:
* AsyncToSync lets a synchronous subthread stop and wait while the async
function is called on the main thread's event loop, and then control is
returned to the thread when the async function is finished.
* SyncToAsync lets async code call a synchronous function, which is run in
a threadpool and control returned to the async coroutine when the synchronous
function completes.
The idea is to make it easier to call synchronous APIs from async code and
asynchronous APIs from synchronous code so it's easier to transition code from
one style to the other. In the case of Channels, we wrap the (synchronous)
Django view system with SyncToAsync to allow it to run inside the (asynchronous)
ASGI server.
Note that exactly what threads things run in is very specific, and aimed to
keep maximum compatibility with old synchronous code. See
"Synchronous code & Threads" below for a full explanation. By default,
``sync_to_async`` will run all synchronous code in the program in the same
thread for safety reasons; you can disable this for more performance with
``@sync_to_async(thread_sensitive=False)``, but make sure that your code does
not rely on anything bound to threads (like database connections) when you do.
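For example, a minimal round trip between the two styles (a sketch;
``do_work`` and ``main`` are illustrative names, not part of asgiref)::

    import time

    from asgiref.sync import async_to_sync, sync_to_async

    def do_work():
        time.sleep(0.1)  # a blocking, synchronous operation
        return "done"

    async def main():
        # Run the sync function in a threadpool without blocking the loop.
        return await sync_to_async(do_work)()

    # Call the async function from plain synchronous code.
    assert async_to_sync(main)() == "done"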
Threadlocal replacement
-----------------------
This is a drop-in replacement for ``threading.local`` that works with both
threads and asyncio Tasks. Even better, it will proxy values through from a
task-local context to a thread-local context when you use ``sync_to_async``
to run things in a threadpool, and vice-versa for ``async_to_sync``.
If you instead want true thread- and task-safety, you can set
``thread_critical`` on the Local object to ensure this instead.
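A minimal sketch (the attribute name is illustrative)::

    from asgiref.local import Local

    state = Local()
    state.user_id = 42       # task-local in async code, thread-local in sync
    assert state.user_id == 42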
Server base classes
-------------------
Includes a ``StatelessServer`` class which provides all the hard work of
writing a stateless server (as in, does not handle direct incoming sockets
but instead consumes external streams or sockets to work out what is happening).
An example of such a server would be a chatbot server that connects out to
a central chat server and provides a "connection scope" per user chatting to
it. There's only one actual connection, but the server has to separate things
into several scopes for easier writing of the code.
You can see an example of this being used in `frequensgi <https://github.com/andrewgodwin/frequensgi>`_.
WSGI-to-ASGI adapter
--------------------
Allows you to wrap a WSGI application so it appears as a valid ASGI application.
Simply wrap it around your WSGI application like so::
asgi_application = WsgiToAsgi(wsgi_application)
The WSGI application will be run in a synchronous threadpool, and the wrapped
ASGI application will be one that accepts ``http`` class messages.
Please note that not all extended features of WSGI may be supported (such as
file handles for incoming POST bodies).
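For instance, wrapping a trivial WSGI app (``simple_app`` is an illustrative
stand-in, not part of asgiref)::

    from asgiref.wsgi import WsgiToAsgi

    def simple_app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello from WSGI"]

    asgi_application = WsgiToAsgi(simple_app)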
Dependencies
------------
``asgiref`` requires Python 3.9 or higher.
Contributing
------------
Please refer to the
`main Channels contributing docs <https://github.com/django/channels/blob/master/CONTRIBUTING.rst>`_.
Testing
'''''''
To run tests, make sure you have installed the ``tests`` extra with the package::
cd asgiref/
pip install -e .[tests]
pytest
Building the documentation
''''''''''''''''''''''''''
The documentation uses `Sphinx <http://www.sphinx-doc.org>`_::
cd asgiref/docs/
pip install sphinx
To build the docs, you can use the default tools::
sphinx-build -b html . _build/html # or `make html`, if you've got make set up
cd _build/html
python -m http.server
...or you can use ``sphinx-autobuild`` to run a server and rebuild/reload
your documentation changes automatically::
pip install sphinx-autobuild
sphinx-autobuild . _build/html
Releasing
'''''''''
To release, first add details to CHANGELOG.txt and update the version number in ``asgiref/__init__.py``.
Then, build and push the packages::
python -m build
twine upload dist/*
rm -r asgiref.egg-info dist
Implementation Details
----------------------
Synchronous code & threads
''''''''''''''''''''''''''
The ``asgiref.sync`` module provides two wrappers that let you go between
asynchronous and synchronous code at will, while taking care of the rough edges
for you.
Unfortunately, the rough edges are numerous, and the code has to work especially
hard to keep things in the same thread as much as possible. Notably, the
restrictions we are working with are:
* All synchronous code called through ``SyncToAsync`` and marked with
``thread_sensitive`` should run in the same thread as each other (and if the
outer layer of the program is synchronous, the main thread)
* If a thread already has a running async loop, ``AsyncToSync`` can't run things
on that loop if it's blocked on synchronous code that is above you in the
call stack.
The first compromise you get to might be that ``thread_sensitive`` code should
just run in the same thread and not spawn in a sub-thread, fulfilling the first
restriction, but that immediately runs you into the second restriction.
The only real solution is to essentially have a variant of ThreadPoolExecutor
that executes any ``thread_sensitive`` code on the outermost synchronous
thread - either the main thread, or a single spawned subthread.
This means you now have two basic states:
* If the outermost layer of your program is synchronous, then all async code
run through ``AsyncToSync`` will run in a per-call event loop in arbitrary
sub-threads, while all ``thread_sensitive`` code will run in the main thread.
* If the outermost layer of your program is asynchronous, then all async code
runs on the main thread's event loop, and all ``thread_sensitive`` synchronous
code will run in a single shared sub-thread.
Crucially, this means that in both cases there is a thread which is a shared
resource that all ``thread_sensitive`` code must run on, and there is a chance
that this thread is currently blocked on its own ``AsyncToSync`` call. Thus,
``AsyncToSync`` needs to act as an executor for thread code while it's blocking.
The ``CurrentThreadExecutor`` class provides this functionality; rather than
simply waiting on a Future, you can call its ``run_until_future`` method and
it will run submitted code until that Future is done. This means that code
inside the call can then run code on your thread.
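A condensed sketch of that pattern, driving the internal
``CurrentThreadExecutor`` directly (illustrative only; this is not public
API)::

    from concurrent.futures import Future
    from threading import Thread

    from asgiref.current_thread_executor import CurrentThreadExecutor

    executor = CurrentThreadExecutor(None)
    finished = Future()

    def worker():
        # Runs in another thread, but schedules work back onto the
        # thread that created the executor.
        executor.submit(print, "runs on the creating thread").result()
        finished.set_result(True)

    Thread(target=worker).start()
    # Execute submitted work items here until `finished` is done.
    executor.run_until_future(finished)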
Maintenance and Security
------------------------
To report security issues, please contact security@djangoproject.com. For GPG
signatures and more security process information, see
https://docs.djangoproject.com/en/dev/internals/security/.
To report bugs or request new features, please open a new GitHub issue.
This repository is part of the Channels project. For the shepherd and maintenance team, please see the
`main Channels readme <https://github.com/django/channels/blob/master/README.rst>`_.

View File

@@ -1,27 +0,0 @@
asgiref-3.10.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
asgiref-3.10.0.dist-info/METADATA,sha256=TlcKOCn3FwSCGD62jZkbckPRh-RjAhkCLLDnfmDZTyA,9287
asgiref-3.10.0.dist-info/RECORD,,
asgiref-3.10.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
asgiref-3.10.0.dist-info/licenses/LICENSE,sha256=uEZBXRtRTpwd_xSiLeuQbXlLxUbKYSn5UKGM0JHipmk,1552
asgiref-3.10.0.dist-info/top_level.txt,sha256=bokQjCzwwERhdBiPdvYEZa4cHxT4NCeAffQNUqJ8ssg,8
asgiref/__init__.py,sha256=iKJAvc5i0UTDDSSefTGL0Tq-kWQ4S3OJJgvyaQfQNF8,23
asgiref/__pycache__/__init__.cpython-312.pyc,,
asgiref/__pycache__/compatibility.cpython-312.pyc,,
asgiref/__pycache__/current_thread_executor.cpython-312.pyc,,
asgiref/__pycache__/local.cpython-312.pyc,,
asgiref/__pycache__/server.cpython-312.pyc,,
asgiref/__pycache__/sync.cpython-312.pyc,,
asgiref/__pycache__/testing.cpython-312.pyc,,
asgiref/__pycache__/timeout.cpython-312.pyc,,
asgiref/__pycache__/typing.cpython-312.pyc,,
asgiref/__pycache__/wsgi.cpython-312.pyc,,
asgiref/compatibility.py,sha256=DhY1SOpOvOw0Y1lSEjCqg-znRUQKecG3LTaV48MZi68,1606
asgiref/current_thread_executor.py,sha256=42CU1VODLTk-_PYise-cP1XgyAvI5Djc8f97owFzdrs,4157
asgiref/local.py,sha256=ZZeWWIXptVU4GbNApMMWQ-skuglvodcQA5WpzJDMxh4,4912
asgiref/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
asgiref/server.py,sha256=3A68169Nuh2sTY_2O5JzRd_opKObWvvrEFcrXssq3kA,6311
asgiref/sync.py,sha256=CEKxFyePiksUoA7MronOKaF6mmNQxUYZjXlfJZXEQCM,22551
asgiref/testing.py,sha256=U5wcs_-ZYTO5SIGfl80EqRAGv_T8BHrAhvAKRuuztT4,4421
asgiref/timeout.py,sha256=LtGL-xQpG8JHprdsEUCMErJ0kNWj4qwWZhEHJ3iKu4s,3627
asgiref/typing.py,sha256=Zi72AZlOyF1C7N14LLZnpAdfUH4ljoBqFdQo_bBKMq0,6290
asgiref/wsgi.py,sha256=J8OAgirfsYHZmxxqIGfFiZ43uq1qKKv2xGMkRISNIo4,6742

View File

@@ -1,5 +0,0 @@
Wheel-Version: 1.0
Generator: setuptools (80.9.0)
Root-Is-Purelib: true
Tag: py3-none-any

View File

@@ -1,27 +0,0 @@
Copyright (c) Django Software Foundation and individual contributors.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of Django nor the names of its contributors may be used
to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1 +0,0 @@
__version__ = "3.10.0"

View File

@@ -1,48 +0,0 @@
import inspect
from .sync import iscoroutinefunction
def is_double_callable(application):
"""
Tests to see if an application is a legacy-style (double-callable) application.
"""
# Look for a hint on the object first
if getattr(application, "_asgi_single_callable", False):
return False
if getattr(application, "_asgi_double_callable", False):
return True
# Uninstantiated classes are double-callable
if inspect.isclass(application):
return True
# Instantiated classes depend on their __call__
if hasattr(application, "__call__"):
# We only check whether its __call__ is a coroutine function -
# if it's not, the object itself might still be one.
if iscoroutinefunction(application.__call__):
return False
# Non-classes we just check directly
return not iscoroutinefunction(application)
def double_to_single_callable(application):
"""
Transforms a double-callable ASGI application into a single-callable one.
"""
async def new_application(scope, receive, send):
instance = application(scope)
return await instance(receive, send)
return new_application
def guarantee_single_callable(application):
"""
Takes either a single- or double-callable application and always returns it
in single-callable style. Use this to add backwards compatibility for ASGI
2.0 applications to your server/test harness/etc.
"""
if is_double_callable(application):
application = double_to_single_callable(application)
return application
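# Minimal sketch (LegacyApp is an illustrative double-callable app, not part
# of this module):
if __name__ == "__main__":  # pragma: no cover
    class LegacyApp:
        def __init__(self, scope):
            self.scope = scope

        async def __call__(self, receive, send):
            pass

    app = guarantee_single_callable(LegacyApp)
    # `app` can now be called as ``await app(scope, receive, send)``.
    assert app is not LegacyApp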

View File

@@ -1,123 +0,0 @@
import sys
import threading
from collections import deque
from concurrent.futures import Executor, Future
from typing import Any, Callable, TypeVar
if sys.version_info >= (3, 10):
from typing import ParamSpec
else:
from typing_extensions import ParamSpec
_T = TypeVar("_T")
_P = ParamSpec("_P")
_R = TypeVar("_R")
class _WorkItem:
"""
Represents an item needing to be run in the executor.
Copied from ThreadPoolExecutor (but it's private, so we're not going to rely on importing it)
"""
def __init__(
self,
future: "Future[_R]",
fn: Callable[_P, _R],
*args: _P.args,
**kwargs: _P.kwargs,
):
self.future = future
self.fn = fn
self.args = args
self.kwargs = kwargs
def run(self) -> None:
__traceback_hide__ = True # noqa: F841
if not self.future.set_running_or_notify_cancel():
return
try:
result = self.fn(*self.args, **self.kwargs)
except BaseException as exc:
self.future.set_exception(exc)
# Break a reference cycle with the exception 'exc'
self = None # type: ignore[assignment]
else:
self.future.set_result(result)
class CurrentThreadExecutor(Executor):
"""
An Executor that actually runs code in the thread it is instantiated in.
Passed to other threads running async code, so they can run sync code in
the thread they came from.
"""
def __init__(self, old_executor: "CurrentThreadExecutor | None") -> None:
self._work_thread = threading.current_thread()
self._work_ready = threading.Condition(threading.Lock())
self._work_items = deque[_WorkItem]() # synchronized by _work_ready
self._broken = False # synchronized by _work_ready
self._old_executor = old_executor
def run_until_future(self, future: "Future[Any]") -> None:
"""
Runs the code in the work queue until a result is available from the future.
Should be run from the thread the executor is initialised in.
"""
# Check we're in the right thread
if threading.current_thread() != self._work_thread:
raise RuntimeError(
"You cannot run CurrentThreadExecutor from a different thread"
)
def done(future: "Future[Any]") -> None:
with self._work_ready:
self._broken = True
self._work_ready.notify()
future.add_done_callback(done)
# Keep getting and running work items until the future we're waiting for
# is done and the queue is empty.
while True:
with self._work_ready:
while not self._work_items and not self._broken:
self._work_ready.wait()
if not self._work_items:
break
# Get a work item and run it
work_item = self._work_items.popleft()
work_item.run()
del work_item
def submit(
self,
fn: Callable[_P, _R],
/,
*args: _P.args,
**kwargs: _P.kwargs,
) -> "Future[_R]":
# Check they're not submitting from the same thread
if threading.current_thread() == self._work_thread:
raise RuntimeError(
"You cannot submit onto CurrentThreadExecutor from its own thread"
)
f: "Future[_R]" = Future()
work_item = _WorkItem(f, fn, *args, **kwargs)
# Walk up the CurrentThreadExecutor stack to find the closest one still
# running
executor = self
while True:
with executor._work_ready:
if not executor._broken:
# Add to work queue
executor._work_items.append(work_item)
executor._work_ready.notify()
break
if executor._old_executor is None:
raise RuntimeError("CurrentThreadExecutor already quit or is broken")
executor = executor._old_executor
# Return the future
return f

View File

@@ -1,131 +0,0 @@
import asyncio
import contextlib
import contextvars
import threading
from typing import Any, Dict, Union
class _CVar:
"""Storage utility for Local."""
def __init__(self) -> None:
self._data: "contextvars.ContextVar[Dict[str, Any]]" = contextvars.ContextVar(
"asgiref.local"
)
def __getattr__(self, key):
storage_object = self._data.get({})
try:
return storage_object[key]
except KeyError:
raise AttributeError(f"{self!r} object has no attribute {key!r}")
def __setattr__(self, key: str, value: Any) -> None:
if key == "_data":
return super().__setattr__(key, value)
storage_object = self._data.get({}).copy()
storage_object[key] = value
self._data.set(storage_object)
def __delattr__(self, key: str) -> None:
storage_object = self._data.get({}).copy()
if key in storage_object:
del storage_object[key]
self._data.set(storage_object)
else:
raise AttributeError(f"{self!r} object has no attribute {key!r}")
class Local:
"""Local storage for async tasks.
This is a namespace object (similar to `threading.local`) where data is
also local to the current async task (if there is one).
In async threads, local means in the same sense as the `contextvars`
module - i.e. a value set in an async frame will be visible:
- to other async code `await`-ed from this frame.
- to tasks spawned using `asyncio` utilities (`create_task`, `wait_for`,
`gather` and probably others).
- to code scheduled in a sync thread using `sync_to_async`
In "sync" threads (a thread with no async event loop running), the
data is thread-local, but additionally shared with async code executed
via the `async_to_sync` utility, which schedules async code in a new thread
and copies context across to that thread.
If `thread_critical` is True, then the local will only be visible per-thread,
behaving exactly like `threading.local` if the thread is sync, and as
`contextvars` if the thread is async. This allows genuinely thread-sensitive
code (such as DB handles) to be kept strictly to their initial thread and
disables sharing across `sync_to_async` and `async_to_sync` wrapped calls.
Unlike plain `contextvars` objects, this utility is threadsafe.
"""
def __init__(self, thread_critical: bool = False) -> None:
self._thread_critical = thread_critical
self._thread_lock = threading.RLock()
self._storage: "Union[threading.local, _CVar]"
if thread_critical:
# Thread-local storage
self._storage = threading.local()
else:
# Contextvar storage
self._storage = _CVar()
@contextlib.contextmanager
def _lock_storage(self):
# Thread safe access to storage
if self._thread_critical:
is_async = True
try:
# this tests whether we are in an async or a sync
# thread - get_running_loop() raises RuntimeError
# if there is no running loop
asyncio.get_running_loop()
except RuntimeError:
is_async = False
if not is_async:
# We are in a sync thread; the storage is
# just the plain thread local (i.e., "global within
# this thread" - wherever you are in the call
# stack, you see the same storage)
yield self._storage
else:
# We are in an async thread - storage is still
# local to this thread, but additionally should
# behave like a context var (it is only visible
# within the same async call stack)
# Ensure context exists in the current thread
if not hasattr(self._storage, "cvar"):
self._storage.cvar = _CVar()
# self._storage is a thread local, so the members
# can't be accessed in another thread (we don't
# need any locks)
yield self._storage.cvar
else:
# Lock for thread_critical=False as other threads
# can access the exact same storage object
with self._thread_lock:
yield self._storage
def __getattr__(self, key):
with self._lock_storage() as storage:
return getattr(storage, key)
def __setattr__(self, key, value):
if key in ("_local", "_storage", "_thread_critical", "_thread_lock"):
return super().__setattr__(key, value)
with self._lock_storage() as storage:
setattr(storage, key, value)
def __delattr__(self, key):
with self._lock_storage() as storage:
delattr(storage, key)

View File

@@ -1,173 +0,0 @@
import asyncio
import logging
import time
import traceback
from .compatibility import guarantee_single_callable
logger = logging.getLogger(__name__)
class StatelessServer:
"""
Base server class that handles basic concepts like application instance
creation/pooling, exception handling, and similar, for stateless protocols
(i.e. ones without actual incoming connections to the process)
Your code should override the handle() method, doing whatever it needs to,
and calling get_or_create_application_instance with a unique `scope_id`
and `scope` for the scope it wants to get.
If an application instance is found with the same `scope_id`, you are
given its input queue, otherwise one is made for you with the scope provided
and you are given that fresh new input queue. Either way, you should do
something like:
input_queue = self.get_or_create_application_instance(
"user-123456",
{"type": "testprotocol", "user_id": "123456", "username": "andrew"},
)
input_queue.put_nowait(message)
If you try to create an application instance and there are already
`max_applications` instances, the oldest/least recently used one will be
reclaimed and shut down to make space.
Application coroutines that error will be found periodically (every 100ms
by default) and have their exceptions printed to the console. Override
application_exception() if you want to do more when this happens.
If you override run(), make sure you handle things like launching the
application checker.
"""
application_checker_interval = 0.1
def __init__(self, application, max_applications=1000):
# Parameters
self.application = application
self.max_applications = max_applications
# Initialisation
self.application_instances = {}
### Mainloop and handling
def run(self):
"""
Runs the asyncio event loop with our handler loop.
"""
event_loop = asyncio.get_event_loop()
try:
event_loop.run_until_complete(self.arun())
except KeyboardInterrupt:
logger.info("Exiting due to Ctrl-C/interrupt")
async def arun(self):
"""
Runs the asyncio event loop with our handler loop.
"""
class Done(Exception):
pass
async def handle():
await self.handle()
raise Done
try:
await asyncio.gather(self.application_checker(), handle())
except Done:
pass
async def handle(self):
raise NotImplementedError("You must implement handle()")
async def application_send(self, scope, message):
"""
Receives outbound sends from applications and handles them.
"""
raise NotImplementedError("You must implement application_send()")
### Application instance management
def get_or_create_application_instance(self, scope_id, scope):
"""
Creates an application instance and returns its queue.
"""
if scope_id in self.application_instances:
self.application_instances[scope_id]["last_used"] = time.time()
return self.application_instances[scope_id]["input_queue"]
# See if we need to delete an old one
while len(self.application_instances) > self.max_applications:
self.delete_oldest_application_instance()
# Make an instance of the application
input_queue = asyncio.Queue()
application_instance = guarantee_single_callable(self.application)
# Run it, and stash the future for later checking
future = asyncio.ensure_future(
application_instance(
scope=scope,
receive=input_queue.get,
send=lambda message: self.application_send(scope, message),
),
)
self.application_instances[scope_id] = {
"input_queue": input_queue,
"future": future,
"scope": scope,
"last_used": time.time(),
}
return input_queue
def delete_oldest_application_instance(self):
"""
Finds and deletes the oldest application instance
"""
oldest_time = min(
details["last_used"] for details in self.application_instances.values()
)
for scope_id, details in self.application_instances.items():
if details["last_used"] == oldest_time:
self.delete_application_instance(scope_id)
# Return to make sure we only delete one in case two have
# the same oldest time
return
def delete_application_instance(self, scope_id):
"""
Removes an application instance (makes sure its task is stopped,
then removes it from the current set)
"""
details = self.application_instances[scope_id]
del self.application_instances[scope_id]
if not details["future"].done():
details["future"].cancel()
async def application_checker(self):
"""
Goes through the set of current application instance Futures and cleans up
any that are done/prints exceptions for any that errored.
"""
while True:
await asyncio.sleep(self.application_checker_interval)
for scope_id, details in list(self.application_instances.items()):
if details["future"].done():
exception = details["future"].exception()
if exception:
await self.application_exception(exception, details)
try:
del self.application_instances[scope_id]
except KeyError:
# Exception handling might have already got here before us. That's fine.
pass
async def application_exception(self, exception, application_details):
"""
Called whenever an application coroutine has an exception.
"""
logging.error(
"Exception inside application: %s\n%s%s",
exception,
"".join(traceback.format_tb(exception.__traceback__)),
f" {exception}",
)

View File

@@ -1,647 +0,0 @@
import asyncio
import asyncio.coroutines
import contextvars
import functools
import inspect
import os
import sys
import threading
import warnings
import weakref
from concurrent.futures import Future, ThreadPoolExecutor
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Coroutine,
Dict,
Generic,
List,
Optional,
TypeVar,
Union,
overload,
)
from .current_thread_executor import CurrentThreadExecutor
from .local import Local
if sys.version_info >= (3, 10):
from typing import ParamSpec
else:
from typing_extensions import ParamSpec
if TYPE_CHECKING:
# This is not available to import at runtime
from _typeshed import OptExcInfo
_F = TypeVar("_F", bound=Callable[..., Any])
_P = ParamSpec("_P")
_R = TypeVar("_R")
def _restore_context(context: contextvars.Context) -> None:
# Check for changes in contextvars, and set them to the current
# context for downstream consumers
for cvar in context:
cvalue = context.get(cvar)
try:
if cvar.get() != cvalue:
cvar.set(cvalue)
except LookupError:
cvar.set(cvalue)
# Python 3.12 deprecates asyncio.iscoroutinefunction() as an alias for
# inspect.iscoroutinefunction(), whilst also removing the _is_coroutine marker.
# The latter is replaced with the inspect.markcoroutinefunction decorator.
# Until 3.12 is the minimum supported Python version, provide a shim.
if hasattr(inspect, "markcoroutinefunction"):
iscoroutinefunction = inspect.iscoroutinefunction
markcoroutinefunction: Callable[[_F], _F] = inspect.markcoroutinefunction
else:
iscoroutinefunction = asyncio.iscoroutinefunction # type: ignore[assignment]
def markcoroutinefunction(func: _F) -> _F:
func._is_coroutine = asyncio.coroutines._is_coroutine # type: ignore
return func
class AsyncSingleThreadContext:
"""Context manager to run async code inside the same thread.
Normally, AsyncToSync functions run either inside a separate ThreadPoolExecutor or
the main event loop if it exists. This context manager ensures that all AsyncToSync
functions execute within the same thread.
This context manager is re-entrant, so only the outer-most call to
AsyncSingleThreadContext will set the context.
Usage:
>>> import asyncio
>>> with AsyncSingleThreadContext():
... async_to_sync(asyncio.sleep)(1)
"""
def __init__(self):
self.token = None
def __enter__(self):
try:
AsyncToSync.async_single_thread_context.get()
except LookupError:
self.token = AsyncToSync.async_single_thread_context.set(self)
return self
def __exit__(self, exc, value, tb):
if not self.token:
return
executor = AsyncToSync.context_to_thread_executor.pop(self, None)
if executor:
executor.shutdown()
AsyncToSync.async_single_thread_context.reset(self.token)
class ThreadSensitiveContext:
"""Async context manager to manage context for thread sensitive mode
This context manager controls which thread pool executor is used when in
thread sensitive mode. By default, a single thread pool executor is shared
within a process.
The ThreadSensitiveContext() context manager may be used to specify a
thread pool per context.
This context manager is re-entrant, so only the outer-most call to
ThreadSensitiveContext will set the context.
Usage:
>>> import time
>>> async with ThreadSensitiveContext():
... await sync_to_async(time.sleep)(1)
"""
def __init__(self):
self.token = None
async def __aenter__(self):
try:
SyncToAsync.thread_sensitive_context.get()
except LookupError:
self.token = SyncToAsync.thread_sensitive_context.set(self)
return self
async def __aexit__(self, exc, value, tb):
if not self.token:
return
executor = SyncToAsync.context_to_thread_executor.pop(self, None)
if executor:
executor.shutdown()
SyncToAsync.thread_sensitive_context.reset(self.token)
class AsyncToSync(Generic[_P, _R]):
"""
Utility class which turns an awaitable that only works on the thread with
the event loop into a synchronous callable that works in a subthread.
If the call stack contains an async loop, the code runs there.
Otherwise, the code runs in a new loop in a new thread.
Either way, this thread then pauses and waits to run any thread_sensitive
code called from further down the call stack using SyncToAsync, before
finally exiting once the async task returns.
"""
# Keeps a reference to the CurrentThreadExecutor in local context, so that
# any sync_to_async inside the wrapped code can find it.
executors: "Local" = Local()
# When we can't find a CurrentThreadExecutor from the context, such as
# inside create_task, we'll look it up here from the running event loop.
loop_thread_executors: "Dict[asyncio.AbstractEventLoop, CurrentThreadExecutor]" = {}
async_single_thread_context: "contextvars.ContextVar[AsyncSingleThreadContext]" = (
contextvars.ContextVar("async_single_thread_context")
)
context_to_thread_executor: "weakref.WeakKeyDictionary[AsyncSingleThreadContext, ThreadPoolExecutor]" = (
weakref.WeakKeyDictionary()
)
def __init__(
self,
awaitable: Union[
Callable[_P, Coroutine[Any, Any, _R]],
Callable[_P, Awaitable[_R]],
],
force_new_loop: bool = False,
):
if not callable(awaitable) or (
not iscoroutinefunction(awaitable)
and not iscoroutinefunction(getattr(awaitable, "__call__", awaitable))
):
# Python does not have very reliable detection of async functions
# (lots of false negatives) so this is just a warning.
warnings.warn(
"async_to_sync was passed a non-async-marked callable", stacklevel=2
)
self.awaitable = awaitable
try:
self.__self__ = self.awaitable.__self__ # type: ignore[union-attr]
except AttributeError:
pass
self.force_new_loop = force_new_loop
self.main_event_loop = None
try:
self.main_event_loop = asyncio.get_running_loop()
except RuntimeError:
# There's no event loop in this thread.
pass
def __call__(self, *args: _P.args, **kwargs: _P.kwargs) -> _R:
__traceback_hide__ = True # noqa: F841
if not self.force_new_loop and not self.main_event_loop:
# There's no event loop in this thread. Look for the threadlocal if
# we're inside SyncToAsync
main_event_loop_pid = getattr(
SyncToAsync.threadlocal, "main_event_loop_pid", None
)
# We make sure the parent loop is from the same process - if
# they've forked, this is not going to be valid any more (#194)
if main_event_loop_pid and main_event_loop_pid == os.getpid():
self.main_event_loop = getattr(
SyncToAsync.threadlocal, "main_event_loop", None
)
# You can't call AsyncToSync from a thread with a running event loop
try:
asyncio.get_running_loop()
except RuntimeError:
pass
else:
raise RuntimeError(
"You cannot use AsyncToSync in the same thread as an async event loop - "
"just await the async function directly."
)
# Make a future for the return information
call_result: "Future[_R]" = Future()
# Make a CurrentThreadExecutor we'll use to idle in this thread - we
# need one for every sync frame, even if there's one above us in the
# same thread.
old_executor = getattr(self.executors, "current", None)
current_executor = CurrentThreadExecutor(old_executor)
self.executors.current = current_executor
# Wrapping context in list so it can be reassigned from within
# `main_wrap`.
context = [contextvars.copy_context()]
# Get task context so that parent task knows which task to propagate
# an asyncio.CancelledError to.
task_context = getattr(SyncToAsync.threadlocal, "task_context", None)
# Use call_soon_threadsafe to schedule a synchronous callback on the
# main event loop's thread if it's there, otherwise make a new loop
# in this thread.
try:
awaitable = self.main_wrap(
call_result,
sys.exc_info(),
task_context,
context,
# prepare an awaitable which can be passed as is to self.main_wrap,
# so that `args` and `kwargs` don't need to be
# destructured when passed to self.main_wrap
# (which is required by `ParamSpec`)
# as that may cause overlapping arguments
self.awaitable(*args, **kwargs),
)
async def new_loop_wrap() -> None:
loop = asyncio.get_running_loop()
self.loop_thread_executors[loop] = current_executor
try:
await awaitable
finally:
del self.loop_thread_executors[loop]
if self.main_event_loop is not None:
try:
self.main_event_loop.call_soon_threadsafe(
self.main_event_loop.create_task, awaitable
)
except RuntimeError:
running_in_main_event_loop = False
else:
running_in_main_event_loop = True
# Run the CurrentThreadExecutor until the future is done.
current_executor.run_until_future(call_result)
else:
running_in_main_event_loop = False
if not running_in_main_event_loop:
loop_executor = None
if self.async_single_thread_context.get(None):
single_thread_context = self.async_single_thread_context.get()
if single_thread_context in self.context_to_thread_executor:
loop_executor = self.context_to_thread_executor[
single_thread_context
]
else:
loop_executor = ThreadPoolExecutor(max_workers=1)
self.context_to_thread_executor[
single_thread_context
] = loop_executor
else:
# Make our own event loop - in a new thread - and run inside that.
loop_executor = ThreadPoolExecutor(max_workers=1)
loop_future = loop_executor.submit(asyncio.run, new_loop_wrap())
# Run the CurrentThreadExecutor until the future is done.
current_executor.run_until_future(loop_future)
# Wait for future and/or allow for exception propagation
loop_future.result()
finally:
_restore_context(context[0])
# Restore old current thread executor state
self.executors.current = old_executor
# Wait for results from the future.
return call_result.result()
def __get__(self, parent: Any, objtype: Any) -> Callable[_P, _R]:
"""
Include self for methods
"""
func = functools.partial(self.__call__, parent)
return functools.update_wrapper(func, self.awaitable)
async def main_wrap(
self,
call_result: "Future[_R]",
exc_info: "OptExcInfo",
task_context: "Optional[List[asyncio.Task[Any]]]",
context: List[contextvars.Context],
awaitable: Union[Coroutine[Any, Any, _R], Awaitable[_R]],
) -> None:
"""
Wraps the awaitable with something that puts the result into the
result/exception future.
"""
__traceback_hide__ = True # noqa: F841
if context is not None:
_restore_context(context[0])
current_task = asyncio.current_task()
if current_task is not None and task_context is not None:
task_context.append(current_task)
try:
# If we have an exception, run the function inside the except block
# after raising it so exc_info is correctly populated.
if exc_info[1]:
try:
raise exc_info[1]
except BaseException:
result = await awaitable
else:
result = await awaitable
except BaseException as e:
call_result.set_exception(e)
else:
call_result.set_result(result)
finally:
if current_task is not None and task_context is not None:
task_context.remove(current_task)
context[0] = contextvars.copy_context()
class SyncToAsync(Generic[_P, _R]):
"""
Utility class which turns a synchronous callable into an awaitable that
runs in a threadpool. It also sets a threadlocal inside the thread so
calls to AsyncToSync can escape it.
If thread_sensitive is passed, the code will run in the same thread as any
outer code. This is needed for underlying Python code that is not
threadsafe (for example, code which handles SQLite database connections).
If the outermost program is async (i.e. SyncToAsync is outermost), then
this will be a dedicated single sub-thread that all sync code runs in,
one after the other. If the outermost program is sync (i.e. AsyncToSync is
outermost), this will just be the main thread. This is achieved by idling
with a CurrentThreadExecutor while AsyncToSync is blocking its sync parent,
rather than just blocking.
If executor is passed in, that will be used instead of the loop's default executor.
In order to pass in an executor, thread_sensitive must be set to False, otherwise
a TypeError will be raised.
"""
# Storage for main event loop references
threadlocal = threading.local()
# Single-thread executor for thread-sensitive code
single_thread_executor = ThreadPoolExecutor(max_workers=1)
# Maintain a contextvar for the current execution context. Optionally used
# for thread sensitive mode.
thread_sensitive_context: "contextvars.ContextVar[ThreadSensitiveContext]" = (
contextvars.ContextVar("thread_sensitive_context")
)
# Contextvar that is used to detect if the single thread executor
# would be awaited on while already being used in the same context
deadlock_context: "contextvars.ContextVar[bool]" = contextvars.ContextVar(
"deadlock_context"
)
# Maintaining a weak reference to the context ensures that thread pools are
# erased once the context goes out of scope. This terminates the thread pool.
context_to_thread_executor: "weakref.WeakKeyDictionary[ThreadSensitiveContext, ThreadPoolExecutor]" = (
weakref.WeakKeyDictionary()
)
def __init__(
self,
func: Callable[_P, _R],
thread_sensitive: bool = True,
executor: Optional["ThreadPoolExecutor"] = None,
) -> None:
if (
not callable(func)
or iscoroutinefunction(func)
or iscoroutinefunction(getattr(func, "__call__", func))
):
raise TypeError("sync_to_async can only be applied to sync functions.")
self.func = func
functools.update_wrapper(self, func)
self._thread_sensitive = thread_sensitive
markcoroutinefunction(self)
if thread_sensitive and executor is not None:
raise TypeError("executor must not be set when thread_sensitive is True")
self._executor = executor
try:
self.__self__ = func.__self__ # type: ignore
except AttributeError:
pass
async def __call__(self, *args: _P.args, **kwargs: _P.kwargs) -> _R:
__traceback_hide__ = True # noqa: F841
loop = asyncio.get_running_loop()
# Work out what thread to run the code in
if self._thread_sensitive:
current_thread_executor = getattr(AsyncToSync.executors, "current", None)
if current_thread_executor:
# If we have a parent sync thread above somewhere, use that
executor = current_thread_executor
elif self.thread_sensitive_context.get(None):
# If we have a way of retrieving the current context, attempt
# to use a per-context thread pool executor
thread_sensitive_context = self.thread_sensitive_context.get()
if thread_sensitive_context in self.context_to_thread_executor:
# Re-use thread executor in current context
executor = self.context_to_thread_executor[thread_sensitive_context]
else:
# Create new thread executor in current context
executor = ThreadPoolExecutor(max_workers=1)
self.context_to_thread_executor[thread_sensitive_context] = executor
elif loop in AsyncToSync.loop_thread_executors:
# Re-use thread executor for running loop
executor = AsyncToSync.loop_thread_executors[loop]
elif self.deadlock_context.get(False):
raise RuntimeError(
"Single thread executor already being used, would deadlock"
)
else:
# Otherwise, we run it in a fixed single thread
executor = self.single_thread_executor
self.deadlock_context.set(True)
else:
# Use the passed in executor, or the loop's default if it is None
executor = self._executor
context = contextvars.copy_context()
child = functools.partial(self.func, *args, **kwargs)
func = context.run
task_context: List[asyncio.Task[Any]] = []
# Run the code in the right thread
exec_coro = loop.run_in_executor(
executor,
functools.partial(
self.thread_handler,
loop,
sys.exc_info(),
task_context,
func,
child,
),
)
ret: _R
try:
ret = await asyncio.shield(exec_coro)
except asyncio.CancelledError:
cancel_parent = True
try:
task = task_context[0]
task.cancel()
try:
await task
cancel_parent = False
except asyncio.CancelledError:
pass
except IndexError:
pass
if exec_coro.done():
raise
if cancel_parent:
exec_coro.cancel()
ret = await exec_coro
finally:
_restore_context(context)
self.deadlock_context.set(False)
return ret
def __get__(
self, parent: Any, objtype: Any
) -> Callable[_P, Coroutine[Any, Any, _R]]:
"""
Include self for methods
"""
func = functools.partial(self.__call__, parent)
return functools.update_wrapper(func, self.func)
def thread_handler(self, loop, exc_info, task_context, func, *args, **kwargs):
"""
Wraps the sync application with exception handling.
"""
__traceback_hide__ = True # noqa: F841
# Set the threadlocal for AsyncToSync
self.threadlocal.main_event_loop = loop
self.threadlocal.main_event_loop_pid = os.getpid()
self.threadlocal.task_context = task_context
# Run the function
# If we have an exception, run the function inside the except block
# after raising it so exc_info is correctly populated.
if exc_info[1]:
try:
raise exc_info[1]
except BaseException:
return func(*args, **kwargs)
else:
return func(*args, **kwargs)
@overload
def async_to_sync(
*,
force_new_loop: bool = False,
) -> Callable[
[Union[Callable[_P, Coroutine[Any, Any, _R]], Callable[_P, Awaitable[_R]]]],
Callable[_P, _R],
]:
...
@overload
def async_to_sync(
awaitable: Union[
Callable[_P, Coroutine[Any, Any, _R]],
Callable[_P, Awaitable[_R]],
],
*,
force_new_loop: bool = False,
) -> Callable[_P, _R]:
...
def async_to_sync(
awaitable: Optional[
Union[
Callable[_P, Coroutine[Any, Any, _R]],
Callable[_P, Awaitable[_R]],
]
] = None,
*,
force_new_loop: bool = False,
) -> Union[
Callable[
[Union[Callable[_P, Coroutine[Any, Any, _R]], Callable[_P, Awaitable[_R]]]],
Callable[_P, _R],
],
Callable[_P, _R],
]:
if awaitable is None:
return lambda f: AsyncToSync(
f,
force_new_loop=force_new_loop,
)
return AsyncToSync(
awaitable,
force_new_loop=force_new_loop,
)
@overload
def sync_to_async(
*,
thread_sensitive: bool = True,
executor: Optional["ThreadPoolExecutor"] = None,
) -> Callable[[Callable[_P, _R]], Callable[_P, Coroutine[Any, Any, _R]]]:
...
@overload
def sync_to_async(
func: Callable[_P, _R],
*,
thread_sensitive: bool = True,
executor: Optional["ThreadPoolExecutor"] = None,
) -> Callable[_P, Coroutine[Any, Any, _R]]:
...
def sync_to_async(
func: Optional[Callable[_P, _R]] = None,
*,
thread_sensitive: bool = True,
executor: Optional["ThreadPoolExecutor"] = None,
) -> Union[
Callable[[Callable[_P, _R]], Callable[_P, Coroutine[Any, Any, _R]]],
Callable[_P, Coroutine[Any, Any, _R]],
]:
if func is None:
return lambda f: SyncToAsync(
f,
thread_sensitive=thread_sensitive,
executor=executor,
)
return SyncToAsync(
func,
thread_sensitive=thread_sensitive,
executor=executor,
)

View File

@@ -1,137 +0,0 @@
import asyncio
import contextvars
import time
from .compatibility import guarantee_single_callable
from .timeout import timeout as async_timeout
class ApplicationCommunicator:
"""
Runs an ASGI application in a test mode, allowing sending of
messages to it and retrieval of messages it sends.
"""
def __init__(self, application, scope):
self._future = None
self.application = guarantee_single_callable(application)
self.scope = scope
self._input_queue = None
self._output_queue = None
# On Python 3.9 we need to bind the queues lazily; on 3.10+ they bind the
# event loop lazily.
@property
def input_queue(self):
if self._input_queue is None:
self._input_queue = asyncio.Queue()
return self._input_queue
@property
def output_queue(self):
if self._output_queue is None:
self._output_queue = asyncio.Queue()
return self._output_queue
@property
def future(self):
if self._future is None:
# Clear context - this ensures that context vars set in the testing scope
# are not "leaked" into the application which would normally begin with
# an empty context. In Python >= 3.11 this could also be written as:
# asyncio.create_task(..., context=contextvars.Context())
self._future = contextvars.Context().run(
asyncio.create_task,
self.application(
self.scope, self.input_queue.get, self.output_queue.put
),
)
return self._future
async def wait(self, timeout=1):
"""
Waits for the application to stop itself and returns any exceptions.
"""
try:
async with async_timeout(timeout):
try:
await self.future
self.future.result()
except asyncio.CancelledError:
pass
finally:
if not self.future.done():
self.future.cancel()
try:
await self.future
except asyncio.CancelledError:
pass
def stop(self, exceptions=True):
future = self._future
if future is None:
return
if not future.done():
future.cancel()
elif exceptions:
# Give a chance to raise any exceptions
future.result()
def __del__(self):
# Clean up on deletion
try:
self.stop(exceptions=False)
except RuntimeError:
# Event loop already stopped
pass
async def send_input(self, message):
"""
Sends a single message to the application
"""
# Make sure there's not an exception to raise from the task
if self.future.done():
self.future.result()
# Give it the message
await self.input_queue.put(message)
async def receive_output(self, timeout=1):
"""
Receives a single message from the application, with optional timeout.
"""
# Make sure there's not an exception to raise from the task
if self.future.done():
self.future.result()
# Wait and receive the message
try:
async with async_timeout(timeout):
return await self.output_queue.get()
except asyncio.TimeoutError as e:
# See if we have another error to raise inside
if self.future.done():
self.future.result()
else:
self.future.cancel()
try:
await self.future
except asyncio.CancelledError:
pass
raise e
async def receive_nothing(self, timeout=0.1, interval=0.01):
"""
Checks that there is no message to receive in the given time.
"""
# Make sure there's not an exception to raise from the task
if self.future.done():
self.future.result()
# `interval` has precedence over `timeout`
start = time.monotonic()
while time.monotonic() - start < timeout:
if not self.output_queue.empty():
return False
await asyncio.sleep(interval)
return self.output_queue.empty()
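# Minimal usage sketch (_echo_app is an illustrative ASGI app, not part of
# this module):
if __name__ == "__main__":  # pragma: no cover
    async def _echo_app(scope, receive, send):
        await send(await receive())

    async def _demo():
        comm = ApplicationCommunicator(_echo_app, {"type": "test"})
        await comm.send_input({"type": "ping"})
        assert await comm.receive_output() == {"type": "ping"}
        await comm.wait()

    asyncio.run(_demo())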

View File

@@ -1,118 +0,0 @@
# This code is originally sourced from the aio-libs project "async_timeout",
# under the Apache 2.0 license. You may see the original project at
# https://github.com/aio-libs/async-timeout
# It is vendored here to reduce chain-dependencies on this library, and
# modified slightly to remove some features we don't use.
import asyncio
import warnings
from types import TracebackType
from typing import Any # noqa
from typing import Optional, Type
class timeout:
"""timeout context manager.
    Useful in cases when you want to apply timeout logic around a block
    of code, or in cases when asyncio.wait_for is not suitable. For example:
>>> with timeout(0.001):
... async with aiohttp.get('https://github.com') as r:
... await r.text()
timeout - value in seconds or None to disable timeout logic
loop - asyncio compatible event loop
"""
def __init__(
self,
timeout: Optional[float],
*,
loop: Optional[asyncio.AbstractEventLoop] = None,
) -> None:
self._timeout = timeout
if loop is None:
loop = asyncio.get_running_loop()
else:
            warnings.warn(
                "The loop argument to timeout() is deprecated.", DeprecationWarning
            )
self._loop = loop
self._task = None # type: Optional[asyncio.Task[Any]]
self._cancelled = False
self._cancel_handler = None # type: Optional[asyncio.Handle]
self._cancel_at = None # type: Optional[float]
def __enter__(self) -> "timeout":
return self._do_enter()
def __exit__(
self,
exc_type: Type[BaseException],
exc_val: BaseException,
exc_tb: TracebackType,
) -> Optional[bool]:
self._do_exit(exc_type)
return None
async def __aenter__(self) -> "timeout":
return self._do_enter()
async def __aexit__(
self,
exc_type: Type[BaseException],
exc_val: BaseException,
exc_tb: TracebackType,
) -> None:
self._do_exit(exc_type)
@property
def expired(self) -> bool:
return self._cancelled
@property
def remaining(self) -> Optional[float]:
if self._cancel_at is not None:
return max(self._cancel_at - self._loop.time(), 0.0)
else:
return None
def _do_enter(self) -> "timeout":
# Support Tornado 5- without timeout
# Details: https://github.com/python/asyncio/issues/392
if self._timeout is None:
return self
self._task = asyncio.current_task(self._loop)
if self._task is None:
            raise RuntimeError(
                "Timeout context manager should be used inside a task"
            )
if self._timeout <= 0:
self._loop.call_soon(self._cancel_task)
return self
self._cancel_at = self._loop.time() + self._timeout
self._cancel_handler = self._loop.call_at(self._cancel_at, self._cancel_task)
return self
def _do_exit(self, exc_type: Type[BaseException]) -> None:
if exc_type is asyncio.CancelledError and self._cancelled:
self._cancel_handler = None
self._task = None
raise asyncio.TimeoutError
if self._timeout is not None and self._cancel_handler is not None:
self._cancel_handler.cancel()
self._cancel_handler = None
self._task = None
return None
def _cancel_task(self) -> None:
if self._task is not None:
self._task.cancel()
self._cancelled = True
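A short usage sketch for the vendored `timeout` above, assuming it is importable as `asgiref.timeout` (the `.timeout` relative import in the testing module suggests it lives there). It must be entered from inside a running task, since construction calls `asyncio.get_running_loop()`.

```
import asyncio

from asgiref.timeout import timeout


async def main():
    try:
        async with timeout(0.1):
            await asyncio.sleep(1)  # Overruns the 0.1s deadline, so it is cancelled
    except asyncio.TimeoutError:
        # _do_exit translates the internal CancelledError into TimeoutError
        print("timed out")


asyncio.run(main())
```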

View File

@@ -1,279 +0,0 @@
import sys
from typing import (
Any,
Awaitable,
Callable,
Dict,
Iterable,
Literal,
Optional,
Protocol,
Tuple,
Type,
TypedDict,
Union,
)
if sys.version_info >= (3, 11):
from typing import NotRequired
else:
from typing_extensions import NotRequired
__all__ = (
"ASGIVersions",
"HTTPScope",
"WebSocketScope",
"LifespanScope",
"WWWScope",
"Scope",
"HTTPRequestEvent",
"HTTPResponseStartEvent",
"HTTPResponseBodyEvent",
"HTTPResponseTrailersEvent",
"HTTPResponsePathsendEvent",
"HTTPServerPushEvent",
"HTTPDisconnectEvent",
"WebSocketConnectEvent",
"WebSocketAcceptEvent",
"WebSocketReceiveEvent",
"WebSocketSendEvent",
"WebSocketResponseStartEvent",
"WebSocketResponseBodyEvent",
"WebSocketDisconnectEvent",
"WebSocketCloseEvent",
"LifespanStartupEvent",
"LifespanShutdownEvent",
"LifespanStartupCompleteEvent",
"LifespanStartupFailedEvent",
"LifespanShutdownCompleteEvent",
"LifespanShutdownFailedEvent",
"ASGIReceiveEvent",
"ASGISendEvent",
"ASGIReceiveCallable",
"ASGISendCallable",
"ASGI2Protocol",
"ASGI2Application",
"ASGI3Application",
"ASGIApplication",
)
class ASGIVersions(TypedDict):
spec_version: str
version: Union[Literal["2.0"], Literal["3.0"]]
class HTTPScope(TypedDict):
type: Literal["http"]
asgi: ASGIVersions
http_version: str
method: str
scheme: str
path: str
raw_path: bytes
query_string: bytes
root_path: str
headers: Iterable[Tuple[bytes, bytes]]
client: Optional[Tuple[str, int]]
server: Optional[Tuple[str, Optional[int]]]
state: NotRequired[Dict[str, Any]]
extensions: Optional[Dict[str, Dict[object, object]]]
class WebSocketScope(TypedDict):
type: Literal["websocket"]
asgi: ASGIVersions
http_version: str
scheme: str
path: str
raw_path: bytes
query_string: bytes
root_path: str
headers: Iterable[Tuple[bytes, bytes]]
client: Optional[Tuple[str, int]]
server: Optional[Tuple[str, Optional[int]]]
subprotocols: Iterable[str]
state: NotRequired[Dict[str, Any]]
extensions: Optional[Dict[str, Dict[object, object]]]
class LifespanScope(TypedDict):
type: Literal["lifespan"]
asgi: ASGIVersions
state: NotRequired[Dict[str, Any]]
WWWScope = Union[HTTPScope, WebSocketScope]
Scope = Union[HTTPScope, WebSocketScope, LifespanScope]
class HTTPRequestEvent(TypedDict):
type: Literal["http.request"]
body: bytes
more_body: bool
class HTTPResponseDebugEvent(TypedDict):
type: Literal["http.response.debug"]
info: Dict[str, object]
class HTTPResponseStartEvent(TypedDict):
type: Literal["http.response.start"]
status: int
headers: Iterable[Tuple[bytes, bytes]]
trailers: bool
class HTTPResponseBodyEvent(TypedDict):
type: Literal["http.response.body"]
body: bytes
more_body: bool
class HTTPResponseTrailersEvent(TypedDict):
type: Literal["http.response.trailers"]
headers: Iterable[Tuple[bytes, bytes]]
more_trailers: bool
class HTTPResponsePathsendEvent(TypedDict):
type: Literal["http.response.pathsend"]
path: str
class HTTPServerPushEvent(TypedDict):
type: Literal["http.response.push"]
path: str
headers: Iterable[Tuple[bytes, bytes]]
class HTTPDisconnectEvent(TypedDict):
type: Literal["http.disconnect"]
class WebSocketConnectEvent(TypedDict):
type: Literal["websocket.connect"]
class WebSocketAcceptEvent(TypedDict):
type: Literal["websocket.accept"]
subprotocol: Optional[str]
headers: Iterable[Tuple[bytes, bytes]]
class WebSocketReceiveEvent(TypedDict):
type: Literal["websocket.receive"]
bytes: Optional[bytes]
text: Optional[str]
class WebSocketSendEvent(TypedDict):
type: Literal["websocket.send"]
bytes: Optional[bytes]
text: Optional[str]
class WebSocketResponseStartEvent(TypedDict):
type: Literal["websocket.http.response.start"]
status: int
headers: Iterable[Tuple[bytes, bytes]]
class WebSocketResponseBodyEvent(TypedDict):
type: Literal["websocket.http.response.body"]
body: bytes
more_body: bool
class WebSocketDisconnectEvent(TypedDict):
type: Literal["websocket.disconnect"]
code: int
reason: Optional[str]
class WebSocketCloseEvent(TypedDict):
type: Literal["websocket.close"]
code: int
reason: Optional[str]
class LifespanStartupEvent(TypedDict):
type: Literal["lifespan.startup"]
class LifespanShutdownEvent(TypedDict):
type: Literal["lifespan.shutdown"]
class LifespanStartupCompleteEvent(TypedDict):
type: Literal["lifespan.startup.complete"]
class LifespanStartupFailedEvent(TypedDict):
type: Literal["lifespan.startup.failed"]
message: str
class LifespanShutdownCompleteEvent(TypedDict):
type: Literal["lifespan.shutdown.complete"]
class LifespanShutdownFailedEvent(TypedDict):
type: Literal["lifespan.shutdown.failed"]
message: str
ASGIReceiveEvent = Union[
HTTPRequestEvent,
HTTPDisconnectEvent,
WebSocketConnectEvent,
WebSocketReceiveEvent,
WebSocketDisconnectEvent,
LifespanStartupEvent,
LifespanShutdownEvent,
]
ASGISendEvent = Union[
HTTPResponseStartEvent,
HTTPResponseBodyEvent,
HTTPResponseTrailersEvent,
HTTPServerPushEvent,
HTTPDisconnectEvent,
WebSocketAcceptEvent,
WebSocketSendEvent,
WebSocketResponseStartEvent,
WebSocketResponseBodyEvent,
WebSocketCloseEvent,
LifespanStartupCompleteEvent,
LifespanStartupFailedEvent,
LifespanShutdownCompleteEvent,
LifespanShutdownFailedEvent,
]
ASGIReceiveCallable = Callable[[], Awaitable[ASGIReceiveEvent]]
ASGISendCallable = Callable[[ASGISendEvent], Awaitable[None]]
class ASGI2Protocol(Protocol):
def __init__(self, scope: Scope) -> None:
...
async def __call__(
self, receive: ASGIReceiveCallable, send: ASGISendCallable
) -> None:
...
ASGI2Application = Type[ASGI2Protocol]
ASGI3Application = Callable[
[
Scope,
ASGIReceiveCallable,
ASGISendCallable,
],
Awaitable[None],
]
ASGIApplication = Union[ASGI2Application, ASGI3Application]
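These declarations exist so applications and servers can be type-checked against the ASGI spec. A minimal sketch of an annotated ASGI 3 app, assuming the module is importable as `asgiref.typing`:

```
from asgiref.typing import ASGIReceiveCallable, ASGISendCallable, Scope


async def app(scope: Scope, receive: ASGIReceiveCallable, send: ASGISendCallable) -> None:
    # Only handle HTTP scopes in this sketch; websocket/lifespan are ignored.
    if scope["type"] != "http":
        return
    await send(
        {
            "type": "http.response.start",
            "status": 200,
            "headers": [(b"content-type", b"text/plain")],
            "trailers": False,
        }
    )
    await send({"type": "http.response.body", "body": b"ok", "more_body": False})
```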

View File

@@ -1,166 +0,0 @@
import sys
from tempfile import SpooledTemporaryFile
from asgiref.sync import AsyncToSync, sync_to_async
class WsgiToAsgi:
"""
Wraps a WSGI application to make it into an ASGI application.
"""
def __init__(self, wsgi_application):
self.wsgi_application = wsgi_application
async def __call__(self, scope, receive, send):
"""
ASGI application instantiation point.
We return a new WsgiToAsgiInstance here with the WSGI app
and the scope, ready to respond when it is __call__ed.
"""
await WsgiToAsgiInstance(self.wsgi_application)(scope, receive, send)
class WsgiToAsgiInstance:
"""
Per-socket instance of a wrapped WSGI application
"""
def __init__(self, wsgi_application):
self.wsgi_application = wsgi_application
self.response_started = False
self.response_content_length = None
async def __call__(self, scope, receive, send):
if scope["type"] != "http":
raise ValueError("WSGI wrapper received a non-HTTP scope")
self.scope = scope
with SpooledTemporaryFile(max_size=65536) as body:
# Alright, wait for the http.request messages
while True:
message = await receive()
if message["type"] != "http.request":
raise ValueError("WSGI wrapper received a non-HTTP-request message")
body.write(message.get("body", b""))
if not message.get("more_body"):
break
body.seek(0)
# Wrap send so it can be called from the subthread
self.sync_send = AsyncToSync(send)
# Call the WSGI app
await self.run_wsgi_app(body)
def build_environ(self, scope, body):
"""
Builds a scope and request body into a WSGI environ object.
"""
script_name = scope.get("root_path", "").encode("utf8").decode("latin1")
path_info = scope["path"].encode("utf8").decode("latin1")
if path_info.startswith(script_name):
path_info = path_info[len(script_name) :]
environ = {
"REQUEST_METHOD": scope["method"],
"SCRIPT_NAME": script_name,
"PATH_INFO": path_info,
"QUERY_STRING": scope["query_string"].decode("ascii"),
"SERVER_PROTOCOL": "HTTP/%s" % scope["http_version"],
"wsgi.version": (1, 0),
"wsgi.url_scheme": scope.get("scheme", "http"),
"wsgi.input": body,
"wsgi.errors": sys.stderr,
"wsgi.multithread": True,
"wsgi.multiprocess": True,
"wsgi.run_once": False,
}
# Get server name and port - required in WSGI, not in ASGI
if "server" in scope:
environ["SERVER_NAME"] = scope["server"][0]
environ["SERVER_PORT"] = str(scope["server"][1])
else:
environ["SERVER_NAME"] = "localhost"
environ["SERVER_PORT"] = "80"
if scope.get("client") is not None:
environ["REMOTE_ADDR"] = scope["client"][0]
# Go through headers and make them into environ entries
for name, value in self.scope.get("headers", []):
name = name.decode("latin1")
if name == "content-length":
corrected_name = "CONTENT_LENGTH"
elif name == "content-type":
corrected_name = "CONTENT_TYPE"
else:
corrected_name = "HTTP_%s" % name.upper().replace("-", "_")
            # HTTPbis says only ASCII chars are allowed in headers, but we decode latin1 just in case
value = value.decode("latin1")
if corrected_name in environ:
value = environ[corrected_name] + "," + value
environ[corrected_name] = value
return environ
def start_response(self, status, response_headers, exc_info=None):
"""
WSGI start_response callable.
"""
# Don't allow re-calling once response has begun
if self.response_started:
raise exc_info[1].with_traceback(exc_info[2])
# Don't allow re-calling without exc_info
if hasattr(self, "response_start") and exc_info is None:
raise ValueError(
"You cannot call start_response a second time without exc_info"
)
# Extract status code
status_code, _ = status.split(" ", 1)
status_code = int(status_code)
# Extract headers
headers = [
(name.lower().encode("ascii"), value.encode("ascii"))
for name, value in response_headers
]
# Extract content-length
self.response_content_length = None
for name, value in response_headers:
if name.lower() == "content-length":
self.response_content_length = int(value)
# Build and send response start message.
self.response_start = {
"type": "http.response.start",
"status": status_code,
"headers": headers,
}
@sync_to_async
def run_wsgi_app(self, body):
"""
Called in a subthread to run the WSGI app. We encapsulate like
this so that the start_response callable is called in the same thread.
"""
# Translate the scope and incoming request body into a WSGI environ
environ = self.build_environ(self.scope, body)
# Run the WSGI app
bytes_sent = 0
for output in self.wsgi_application(environ, self.start_response):
# If this is the first response, include the response headers
if not self.response_started:
self.response_started = True
self.sync_send(self.response_start)
# If the application supplies a Content-Length header
if self.response_content_length is not None:
# The server should not transmit more bytes to the client than the header allows
bytes_allowed = self.response_content_length - bytes_sent
if len(output) > bytes_allowed:
output = output[:bytes_allowed]
self.sync_send(
{"type": "http.response.body", "body": output, "more_body": True}
)
bytes_sent += len(output)
# The server should stop iterating over the response when enough data has been sent
if bytes_sent == self.response_content_length:
break
# Close connection
if not self.response_started:
self.response_started = True
self.sync_send(self.response_start)
self.sync_send({"type": "http.response.body"})
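A minimal sketch of the wrapper above in use. The `hello_wsgi` app is an illustrative assumption; `WsgiToAsgi` is the class defined in this file.

```
from asgiref.wsgi import WsgiToAsgi


def hello_wsgi(environ, start_response):
    # Plain WSGI app; Content-Length lets the wrapper stop at exactly 2 bytes.
    start_response("200 OK", [("Content-Type", "text/plain"), ("Content-Length", "2")])
    return [b"hi"]


# Any ASGI server can now serve it, e.g.:  uvicorn module:asgi_app
asgi_app = WsgiToAsgi(hello_wsgi)
```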

View File

@@ -1,123 +0,0 @@
Metadata-Version: 2.4
Name: beautifulsoup4
Version: 4.14.2
Summary: Screen-scraping library
Project-URL: Download, https://www.crummy.com/software/BeautifulSoup/bs4/download/
Project-URL: Homepage, https://www.crummy.com/software/BeautifulSoup/bs4/
Author-email: Leonard Richardson <leonardr@segfault.org>
License: MIT License
License-File: AUTHORS
License-File: LICENSE
Keywords: HTML,XML,parse,soup
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Text Processing :: Markup :: HTML
Classifier: Topic :: Text Processing :: Markup :: SGML
Classifier: Topic :: Text Processing :: Markup :: XML
Requires-Python: >=3.7.0
Requires-Dist: soupsieve>1.2
Requires-Dist: typing-extensions>=4.0.0
Provides-Extra: cchardet
Requires-Dist: cchardet; extra == 'cchardet'
Provides-Extra: chardet
Requires-Dist: chardet; extra == 'chardet'
Provides-Extra: charset-normalizer
Requires-Dist: charset-normalizer; extra == 'charset-normalizer'
Provides-Extra: html5lib
Requires-Dist: html5lib; extra == 'html5lib'
Provides-Extra: lxml
Requires-Dist: lxml; extra == 'lxml'
Description-Content-Type: text/markdown
Beautiful Soup is a library that makes it easy to scrape information
from web pages. It sits atop an HTML or XML parser, providing Pythonic
idioms for iterating, searching, and modifying the parse tree.
# Quick start
```
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup("<p>Some<b>bad<i>HTML")
>>> print(soup.prettify())
<html>
<body>
<p>
Some
<b>
bad
<i>
HTML
</i>
</b>
</p>
</body>
</html>
>>> soup.find(string="bad")
'bad'
>>> soup.i
<i>HTML</i>
#
>>> soup = BeautifulSoup("<tag1>Some<tag2/>bad<tag3>XML", "xml")
#
>>> print(soup.prettify())
<?xml version="1.0" encoding="utf-8"?>
<tag1>
Some
<tag2/>
bad
<tag3>
XML
</tag3>
</tag1>
```
To go beyond the basics, [comprehensive documentation is available](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
# Links
* [Homepage](https://www.crummy.com/software/BeautifulSoup/bs4/)
* [Documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
* [Discussion group](https://groups.google.com/group/beautifulsoup/)
* [Development](https://code.launchpad.net/beautifulsoup/)
* [Bug tracker](https://bugs.launchpad.net/beautifulsoup/)
* [Complete changelog](https://git.launchpad.net/beautifulsoup/tree/CHANGELOG)
# Note on Python 2 sunsetting
Beautiful Soup's support for Python 2 was discontinued on December 31,
2020: one year after the sunset date for Python 2 itself. From this
point onward, new Beautiful Soup development will exclusively target
Python 3. The final release of Beautiful Soup 4 to support Python 2
was 4.9.3.
# Supporting the project
If you use Beautiful Soup as part of your professional work, please consider a
[Tidelift subscription](https://tidelift.com/subscription/pkg/pypi-beautifulsoup4?utm_source=pypi-beautifulsoup4&utm_medium=referral&utm_campaign=readme).
This will support many of the free software projects your organization
depends on, not just Beautiful Soup.
If you use Beautiful Soup for personal projects, the best way to say
thank you is to read
[Tool Safety](https://www.crummy.com/software/BeautifulSoup/zine/), a zine I
wrote about what Beautiful Soup has taught me about software
development.
# Building the documentation
The bs4/doc/ directory contains full documentation in Sphinx
format. Run `make html` in that directory to create HTML
documentation.
# Running the unit tests
Beautiful Soup supports unit test discovery using Pytest:
```
$ pytest
```

Some files were not shown because too many files have changed in this diff.