mirror of
https://github.com/Mesteriis/hassio-addons-avm.git
synced 2026-01-09 23:11:02 +01:00
update repository references and improve script handling
44
hassio-google-drive-backup/AUTHENTICATION.md
Normal file
@@ -0,0 +1,44 @@
|
||||
# Authentication with Google Drive
|
||||
This document describes how the addon (Home Assistant Google Drive Backup) authenticates with Google Drive and stores your credentials. It's geared toward those who want more detail and isn't required reading to take advantage of the addon's full features; it's provided in the interest of full transparency into how the add-on works. I've tried to describe this as plainly as possible, but it is technical and may not be understandable to everyone. Feedback on its clarity is appreciated.
|
||||
|
||||
> This document describes how authentication works if you use the big blue "AUTHENTICATE WITH GOOGLE DRIVE" button in the addon. If you're using [your own Google Drive credentials](https://github.com/sabeechen/hassio-google-drive-backup/blob/master/LOCAL_AUTH.md), then none of this applies.
|
||||
|
||||
## Your Credentials and the Needed Permission
|
||||
To have access to any information in Google Drive, Google's authentication servers must be told that the add-on has been granted permission. The add-on uses [Google Drive's REST API (v3)](https://developers.google.com/drive/api/v3/about-sdk) for communication and requests the [drive.file](https://developers.google.com/drive/api/v3/about-auth) permission *scope*. This *scope* means the add-on has access to files and folders that the add-on created, but nothing else. It can't see files you've added to Google Drive through the web interface or anywhere else. The REST API lets the addon periodically check, over the internet, which backups are uploaded and upload new ones when necessary.
|
||||
|
||||
## Authentication with Google Services
|
||||
For reference, Google's documentation for how to authenticate users with the Google Drive REST API is [here](https://developers.google.com/drive/api/v3/about-auth). Authentication is handled through [OAuth 2.0](https://developers.google.com/identity/protocols/OAuth2), which means that the add-on never actually sees your Google username and password, only an opaque [security token](https://en.wikipedia.org/wiki/Access_token) used to verify that the addon has been given permission. More detail is provided about what that token is and where it is stored later in this document.
|
||||
|
||||
The way a web-based application would normally authenticate with a Google service (eg Google Drive) looks something like this:
|
||||
1. User navigates to the app's webpage, eg http://examplegoogleapp.com
|
||||
2. The app generates a URL to Google's servers (https://accounts.google.com) used to grant the app permission.
|
||||
3. User navigates there, enters their Google username and password, and confirms the intention to give the app some permission (eg one or more *scopes*).
|
||||
4. Google redirects the user back to the app's webpage with an access token appended to the URL (eg http://examplegoogleapp.com/authenticate?token=0x12345678)
|
||||
5. The app stores the access token (0x12345678 in this example) and passes it back to Google whenever it wishes to access the API on behalf of the user who logged in.
|
||||
|
||||
This access token allows the app to act as if it is the user who created it. In the case of this add-on, the permission granted by the drive.file scope allows it to create folders, upload backups, and retrieve the previously created folders. Because the add-on only ever sees the access token (not the username/password), and the access token only grants limited permissions, the add-on doesn't have a way to elevate its permission further to access other information in Google Drive or your Google account.
|
||||
|
||||
## Authentication for the Add-on
|
||||
|
||||
Google puts some limitations on how the access token must be generated that will be important for understanding how the add-on authenticates in reality:
|
||||
* When the user is redirected to https://accounts.google.com (step 2), the redirect must be from a known public website associated with the app.
|
||||
* When the user is redirected back to the app after authorization (step 4), the redirect must be a statically addressed and publicly accessible website.
|
||||
|
||||
These limitations pose a technical problem for the addon: most people's Home Assistant instances aren't publicly accessible, and the address is different for each one, so performing the authentication workflow exactly as described above won't work. To get around this, I (the developer of this addon) set up a website, https://habackup.io, which serves as the known, public, statically addressable website that Google redirects from/to. The source code for this server is available within the add-on's GitHub repository.
|
||||
|
||||
So when you authenticate the add-on, the workflow looks like this:
|
||||
1. You start at the add-on's web interface, something like https://homeassistant.local:8123/ingress/hassio_google_drive_backup
|
||||
2. You click the "Authenticate With Google Drive" button, which takes note of the address of your Home Assistant installation (https://homeassistant.local:8123 in this case) and sends you to https://habackup.io/drive/authorize
|
||||
3. https://habackup.io immediately generates the Google login URL for you and redirects you to https://accounts.google.com
|
||||
4. You log in with your Google credentials on Google's domain, and confirm you want to give the add-on permission to see files and folders it creates (the drive.file scope)
|
||||
5. Google redirects you back to https://habackup.io, along with the access token that will be used for future authentication.
|
||||
6. https://habackup.io redirects you back to your add-on's web UI (recorded in step 2) along with the access token.
|
||||
7. The addon (on your local Home Assistant installation) persists the access token and uses it in the future any time it needs to talk to Google Drive.
|
||||
|
||||
Notably, your access token isn't persisted at https://habackup.io; it is only passed through to your local add-on installation. I do this because:
|
||||
- It ensures your information is only ever stored on your machine, which is reassuring from the user's perspective (eg you).
|
||||
- If my server (https://habackup.io) ever gets compromised, there isn't any valuable information stored there that compromises you as well.
|
||||
- It practices a form of [defense-in-depth](https://en.wikipedia.org/wiki/Defense_in_depth_%28computing%29) security, wherein [personal data](https://en.wikipedia.org/wiki/Personal_data) is only stored where it is strictly necessary.
|
||||
- It keeps the server simple, since it is stateless and doesn't require a database (eg to store your token).
|
||||
|
||||
After your token is generated and stored on your machine, it needs to be *refreshed* periodically with Google Drive. To do this, the addon again asks https://habackup.io, which relays the refresh request to Google Drive.
|
||||
123
hassio-google-drive-backup/BACKUP_AND_SNAPSHOT.md
Normal file
@@ -0,0 +1,123 @@
|
||||
# 'Snapshot' vs 'Backup'
|
||||
In August 2021 [the Home Assistant team announced](https://www.home-assistant.io/blog/2021/08/24/supervisor-update/) that 'snapshots' would be called 'backups' moving forward. This addon exposes a binary sensor that indicates when backups are stale and another sensor that publishes details about backups. Both sensors used 'snapshot' in their names and values, so they had to change to match the new language. To avoid breaking any existing automations you might have, the addon will only start using the new names and values after an upgrade if you tell it to.
|
||||
|
||||
This is controlled by the configuration option ```call_backup_snapshot```, which keeps the old sensor names and values when it is true. If you updated the addon from a version that used 'snapshot' in its names, this option is added automatically during the update so existing automations don't break.
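For example, a minimal options snippet that keeps the legacy 'snapshot' sensor names after an upgrade would look like this:

```yaml
# Keep the old 'snapshot' sensor names and values
call_backup_snapshot: true
```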
|
||||
|
||||
Here is a breakdown of what the new and old sensor values mean:
|
||||
|
||||
## Old sensor name/values
|
||||
These will be the sensor values used when ```call_backup_snapshot: True``` or if the addon is below version 0.105.1. The addon sets ```call_backup_snapshot: True``` automatically if you upgrade the addon from an older version.
|
||||
### Backup Stale Binary Sensor
|
||||
#### Entity Id:
|
||||
```yaml
|
||||
binary_sensor.snapshots_stale
|
||||
```
|
||||
#### Possible states:
|
||||
```yaml
|
||||
on
|
||||
off
|
||||
```
|
||||
#### Example Attributes:
|
||||
```yaml
|
||||
friendly_name: Snapshots Stale
|
||||
device_class: problem
|
||||
```
|
||||
### Backup State Sensor
|
||||
#### Entity Id:
|
||||
```yaml
|
||||
sensor.snapshot_backup
|
||||
```
|
||||
#### Possible States:
|
||||
```yaml
|
||||
error
|
||||
waiting
|
||||
backed_up
|
||||
```
|
||||
#### Example Attributes:
|
||||
```yaml
|
||||
friendly_name: Snapshots State
|
||||
last_snapshot: 2021-09-01T20:26:49.100376+00:00
|
||||
snapshots_in_google_drive: 2
|
||||
snapshots_in_hassio: 2
|
||||
snapshots_in_home_assistant: 2
|
||||
size_in_google_drive: 2.5 GB
|
||||
size_in_home_assistant: 2.5 GB
|
||||
snapshots:
|
||||
- name: Full Snapshot 2021-02-06 11:37:00
|
||||
date: '2021-02-06T18:37:00.916510+00:00'
|
||||
state: Backed Up
|
||||
slug: DFG123
|
||||
- name: Full Snapshot 2021-02-07 11:00:00
|
||||
date: '2021-02-07T18:00:00.916510+00:00'
|
||||
state: Backed Up
|
||||
slug: DFG124
|
||||
```
|
||||
|
||||
## New Sensor Names/Values
|
||||
These are the sensor names and values used when ```call_backup_snapshot: False``` or when the configuration option is unset. New installations of the addon default to this.
|
||||
### Backup Stale Binary Sensor
|
||||
#### Entity Id
|
||||
```yaml
|
||||
binary_sensor.backups_stale
|
||||
```
|
||||
#### Possible States
|
||||
```yaml
|
||||
on
|
||||
off
|
||||
```
|
||||
#### Example Attributes:
|
||||
```yaml
|
||||
friendly_name: Backups Stale
|
||||
device_class: problem
|
||||
```
|
||||
### Backup State Sensor
|
||||
#### Entity Id
|
||||
```yaml
|
||||
sensor.backup_state
|
||||
```
|
||||
#### Possible States
|
||||
```yaml
|
||||
error
|
||||
waiting
|
||||
backed_up
|
||||
```
|
||||
#### Example Attributes:
|
||||
```yaml
|
||||
friendly_name: Backup State
|
||||
last_backup: 2021-09-01T20:26:49.100376+00:00
|
||||
last_upload: 2021-09-01T20:26:49.100376+00:00
|
||||
backups_in_google_drive: 2
|
||||
backups_in_home_assistant: 2
|
||||
size_in_google_drive: 2.5 GB
|
||||
size_in_home_assistant: 2.5 GB
|
||||
backups:
|
||||
- name: Full Snapshot 2021-02-06 11:37:00
|
||||
date: '2021-02-06T18:37:00.916510+00:00'
|
||||
state: Backed Up
|
||||
slug: DFG123
|
||||
- name: Full Snapshot 2021-02-07 11:00:00
|
||||
date: '2021-02-07T18:00:00.916510+00:00'
|
||||
state: Backed Up
|
||||
slug: DFG124
|
||||
```
|
||||
|
||||
### What do the values mean?
|
||||
```binary_sensor.backups_stale``` is "on" when backups are stale and "off" otherwise. Backups are stale when the addon is 6 hours past a scheduled backup and no new backup has been made. This delay is in place to avoid triggering on transient errors (eg internet connectivity problems or one-off problems in Home Assistant).
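As an illustration, an automation like the one below could notify you when backups go stale. This is a sketch (assuming the new sensor names), and the notify service is a placeholder for whatever notifier you use:

```yaml
automation:
  - alias: "Warn when backups are stale"
    trigger:
      - platform: state
        entity_id: binary_sensor.backups_stale
        to: "on"
    action:
      - service: notify.notify  # replace with your own notification service
        data:
          title: "Backup problem"
          message: "No new backup has been made recently, check the Google Drive Backup add-on."
```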
|
||||
|
||||
```sensor.backup_state``` is:
|
||||
- ```waiting``` when the addon is first booted up or hasn't been connected to Google Drive yet.
|
||||
- ```error``` immediately after any error is encountered, even transient ones.
|
||||
- ```backed_up``` when everything is running fine without errors.
|
||||
|
||||
Its attributes (see the template example after this list) are:
|
||||
- ```last_backup``` The UTC ISO-8601 date of the most recent backup in Home Assistant or Google Drive.
|
||||
- ```last_upload``` The UTC ISO-8601 date of the most recent backup uploaded to Google Drive.
|
||||
- ```backups_in_google_drive``` The number of backups in Google Drive.
|
||||
- ```backups_in_home_assistant``` The number of backups in Home Assistant.
|
||||
- ```size_in_google_drive``` A string representation of the space used by backups in Google Drive.
|
||||
- ```size_in_home_assistant``` A string representation of the space used by backups in Home Assistant.
|
||||
- ```backups``` The list of backups in descending order of date. Each backup includes its ```name```, ```date```, ```slug```, and ```state```. ```state``` can be one of:
|
||||
  - ```Backed Up``` if it's in Home Assistant and Google Drive.
|
||||
  - ```HA Only``` if it's only in Home Assistant.
|
||||
  - ```Drive Only``` if it's only in Google Drive.
|
||||
  - ```Pending``` if the backup was requested but is not yet complete.
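As a sketch of reading these attributes from a template (assuming the new sensor names above), a hypothetical template sensor could expose the count of backups in Google Drive:

```yaml
template:
  - sensor:
      - name: "Backups in Google Drive"
        state: "{{ state_attr('sensor.backup_state', 'backups_in_google_drive') }}"
```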
|
||||
43
hassio-google-drive-backup/CHANGELOG.md
Normal file
@@ -0,0 +1,43 @@
|
||||
## v0.112.1 [2023-11-03]
|
||||
|
||||
- Added warnings about using the "Stop Addons" feature. I plan on removing this in the near future. If you'd like to keep the feature around, please give your feedback in [this GitHub issue](https://github.com/sabeechen/hassio-google-drive-backup/issues/940).
|
||||
- When backups are stuck in the "pending" state, the addon now provides you with the Supervisor logs to help figure out what's wrong.
|
||||
- Added support for the "exclude Home Assistant database" options for automatic backups
|
||||
- Added configuration options to limit the speed of uploads to Google Drive
|
||||
- When Google Drive doesn't have enough space, the addon now explains how much space you're using and how much is left. This was a source of confusion for users.
|
||||
- When the addon halts because it needs to delete more than one backup, it now tells you which backups will be deleted.
|
||||
- Fixed a bug when using "stop addons" that prevented it from recognizing addons in the "starting" state.
|
||||
- The addon's containers are now downloaded from GitHub (previously DockerHub).
|
||||
- Added another redundant token provider, hosted on Heroku, which the addon uses for its cloud-required component when you aren't using your own Google app credentials.
|
||||
|
||||
## v0.111.1 [2023-06-19]
|
||||
|
||||
- Support for the new network storage features in Home Assistant. The addon will now create backups in what Home Assistant has configured as its default backup location. This can be overridden in the addon's settings.
|
||||
- Raised the addon's required permissions to "Admin" in order to access the supervisor's mount API.
|
||||
- Fixed a CSS error causing toast messages to render partially off screen on small displays.
|
||||
- Fixed misreporting of some error codes from Google Drive when a partial upload can't be resumed.
|
||||
|
||||
## v0.110.4 [2023-04-28]
|
||||
|
||||
- Fix a whitespace error causing authorization to fail.
|
||||
|
||||
## v0.110.3 [2023-03-24]
|
||||
|
||||
- Fix an error causing "Days Between Backups" to be ignored when "Time of Day" for a backup is set.
|
||||
- Fix a bug causing some timezones to make the addon fail to start.
|
||||
|
||||
## v0.110.2 [2023-03-24]
|
||||
|
||||
- Fix a potential cause of SSL errors when communicating with Google Drive
|
||||
- Fix a bug causing backups to be requested indefinitely if scheduled during DST transitions.
|
||||
|
||||
## v0.110.1 [2023-01-09]
|
||||
|
||||
- Adds some additional options for donating
|
||||
- Mitigates SD card corruption by redundantly storing the config files needed for addon startup.
|
||||
- Avoid global throttling of Google Drive API calls by:
|
||||
- Making sync intervals more spread out and a little random.
|
||||
- Syncing more selectively when there are modifications to the /backup directory.
|
||||
- Caching data from Google Drive for short periods during periodic syncing.
|
||||
- Backing off for a longer time (2 hours) when the addon hits permanent errors.
|
||||
- Fixes CSS issues that made the logs page hard to use.
|
||||
205
hassio-google-drive-backup/DOCS.md
Normal file
@@ -0,0 +1,205 @@
|
||||
# Home Assistant Add-on: Google Drive Backup
|
||||
|
||||
## Installation
|
||||
|
||||
To install the add-on, first follow the installation steps from the [README on GitHub](https://github.com/sabeechen/hassio-google-drive-backup#installation).
|
||||
|
||||
## Configuration
|
||||
|
||||
_Note_: The configuration can be changed easily by starting the add-on and clicking `Settings` in the web UI.
|
||||
The UI explains what each setting is and you don't need to modify anything before clicking `Start`.
|
||||
If you would still prefer to modify the settings in yaml, the options are detailed below.
|
||||
|
||||
### Add-on configuration example
|
||||
Don't use this directly; the addon has a lot of configuration options that most users don't need or want:
|
||||
|
||||
```yaml
|
||||
# Keep 10 backups in Home Assistant
|
||||
max_backups_in_ha: 10
|
||||
|
||||
# Keep 10 backups in Google Drive
|
||||
max_backups_in_google_drive: 10
|
||||
|
||||
# Create backups in Home Assistant on network storage
|
||||
backup_location: my_nfs_share
|
||||
|
||||
# Ignore backups the add-on hasn't created
|
||||
ignore_other_backups: True
|
||||
|
||||
# Ignore backups that look like they were created by Home Assistant automatic backup option during upgrades
|
||||
ignore_upgrade_backups: True
|
||||
|
||||
# Automatically delete "ignored" backups after this many days
|
||||
delete_ignored_after_days: 7
|
||||
|
||||
# Take a backup every 3 days
|
||||
days_between_backups: 3
|
||||
|
||||
# Create backups at 1:30pm exactly
|
||||
backup_time_of_day: "13:30"
|
||||
|
||||
# Delete backups from Home Assistant immediately after uploading them to Google Drive
|
||||
delete_after_upload: True
|
||||
|
||||
# Manually specify the backup folder used in Google Drive
|
||||
specify_backup_folder: true
|
||||
|
||||
# Use a dark and red theme
|
||||
background_color: "#242424"
|
||||
accent_color: "#7D0034"
|
||||
|
||||
# Use a password for backup archives. Use "!secret secret_name" to use a password from your secrets file
|
||||
backup_password: "super_secret"
|
||||
|
||||
# Create backup names like 'Full Backup HA 0.92.0'
|
||||
backup_name: "{type} Backup HA {version_ha}"
|
||||
|
||||
# Keep a backup once every day for 3 days and once a week for 4 weeks
|
||||
generational_days: 3
|
||||
generational_weeks: 4
|
||||
|
||||
# Create partial backups with no folders and no configurator add-on
|
||||
exclude_folders: "homeassistant,ssl,share,addons/local,media"
|
||||
exclude_addons: "core_configurator"
|
||||
|
||||
# Turn off notifications and staleness sensor
|
||||
enable_backup_stale_sensor: false
|
||||
notify_for_stale_backups: false
|
||||
|
||||
# Enable server directly on port 1627
|
||||
expose_extra_server: true
|
||||
|
||||
# Allow sending error reports
|
||||
send_error_reports: true
|
||||
|
||||
# Delete backups after they're uploaded to Google Drive
|
||||
delete_after_upload: true
|
||||
```
|
||||
|
||||
### Option: `max_backups_in_ha` (default: 4)
|
||||
|
||||
The number of backups the add-on will allow Home Assistant to store locally before old ones are deleted.
|
||||
|
||||
### Option: `max_backups_in_google_drive` (default: 4)
|
||||
|
||||
The number of backups the add-on will keep in Google Drive before old ones are deleted. Google Drive gives you 15GB of free storage (at the time of writing) so plan accordingly if you know how big your backups are.
|
||||
|
||||
### Option: `backup_location` (default: None)
|
||||
The place where backups are created in Home Assistant before uploading to Google Drive. Can be "local-disk" or the name of any backup network storage you've configured in Home Assistant. Leave unspecified (the default) to have backups created in whatever Home Assistant uses as the default backup location.
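For example, to create backups on a network share you've configured in Home Assistant's storage settings (the share name below is just illustrative):

```yaml
backup_location: my_nfs_share
```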
|
||||
|
||||
### Option: `ignore_other_backups` (default: False)
|
||||
Make the addon ignore any backups it didn't directly create. Any backup already uploaded to Google Drive will not be ignored until you delete it from Google Drive.
|
||||
|
||||
### Option: `ignore_upgrade_backups` (default: False)
|
||||
Ignores backups that look like they were automatically created from updating an add-on or Home Assistant itself. This will make the add-on ignore any partial backup that has only one add-on or folder in it.
|
||||
|
||||
### Option: `days_between_backups` (default: 3)
|
||||
|
||||
How often a new backup should be scheduled, eg `1` for daily and `7` for weekly.
|
||||
|
||||
### Option: `backup_time_of_day`
|
||||
|
||||
The time of day (local time) that new backups should be created, in 24-hour ("HH:MM") format. When not specified, backups are created at (roughly) the same time of day as the most recent backup.
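For example, a daily backup created at 1:30 AM could be configured like this, combining it with `days_between_backups`:

```yaml
days_between_backups: 1
backup_time_of_day: "01:30"
```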
|
||||
|
||||
|
||||
### Option: `delete_after_upload` (default: False)
|
||||
|
||||
Deletes backups from Home Assistant immediately after uploading them to Google Drive. This is useful if you have very limited space inside Home Assistant since you only need to have available space for a single backup locally.
|
||||
|
||||
### Option: `specify_backup_folder` (default: False)
|
||||
|
||||
When true, you must select the folder in Google Drive where backups are stored. Once you turn this on, restart the add-on and visit the Web-UI to be prompted to select the backup folder.
|
||||
|
||||
### Option: `background_color` and `accent_color`
|
||||
|
||||
The background and accent colors for the web UI. You can use this to make the UI fit in with whatever color scheme you use in Home Assistant. When unset, the interface matches Home Assistant's default blue/white style.
|
||||
|
||||
### Option: `backup_password`
|
||||
|
||||
When set, backups are created with a password. You can use a value from your secrets.yaml by prefixing the password with "!secret". You'll need to remember this password when restoring a backup.
|
||||
|
||||
> Example: Use a password for backup archives
|
||||
>
|
||||
> ```yaml
|
||||
> backup_password: "super_secret"
|
||||
> ```
|
||||
>
|
||||
> Example: Use a password from secrets.yaml
|
||||
>
|
||||
> ```yaml
|
||||
> backup_password: "!secret backup_password"
|
||||
> ```
|
||||
|
||||
### Option: `backup_name` (default: "{type} Backup {year}-{month}-{day} {hr24}:{min}:{sec}")
|
||||
|
||||
Sets the name for new backups. Variable parameters of the form `{variable_name}` can be used to modify the name to your liking. A list of available variables is available [here](https://github.com/sabeechen/hassio-google-drive-backup#can-i-give-backups-a-different-name).
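For example, reusing the variables from the configuration example above, a name like 'Full Backup HA 0.92.0' comes from:

```yaml
backup_name: "{type} Backup HA {version_ha}"
```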
|
||||
|
||||
### Option: `generational_*`
|
||||
|
||||
When set, older backups will be kept longer using a [generational backup scheme](https://en.wikipedia.org/wiki/Backup_rotation_scheme). See the [question here](https://github.com/sabeechen/hassio-google-drive-backup#can-i-keep-older-backups-for-longer) for configuration options.
|
||||
|
||||
### Option: `exclude_folders`
|
||||
|
||||
When set, excludes the comma-separated list of folders by creating a partial backup.
|
||||
|
||||
### Option: `exclude_addons`
|
||||
|
||||
When set, excludes the comma-separated list of addons by creating a partial backup.
|
||||
|
||||
_Note_: Folders and add-ons must be identified by their "slug" name. It is recommended to use the `Settings` dialog within the add-on web UI to configure partial backups since these names are esoteric and hard to find.
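For instance, reusing the slugs from the configuration example above, a partial backup that skips the share and media folders and the configurator add-on would be:

```yaml
exclude_folders: "share,media"
exclude_addons: "core_configurator"
```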
|
||||
|
||||
### Option: `enable_backup_stale_sensor` (default: True)
|
||||
|
||||
When false, the add-on will not publish the [binary_sensor.backups_stale](https://github.com/sabeechen/hassio-google-drive-backup#how-will-i-know-this-will-be-there-when-i-need-it) stale sensor.
|
||||
|
||||
### Option: `enable_backup_state_sensor` (default: True)
|
||||
|
||||
When false, the add-on will not publish the [sensor.backup_state](https://github.com/sabeechen/hassio-google-drive-backup#how-will-i-know-this-will-be-there-when-i-need-it) sensor.
|
||||
|
||||
### Option: `notify_for_stale_backups` (default: True)
|
||||
|
||||
When false, the add-on will not send a [persistent notification](https://github.com/sabeechen/hassio-google-drive-backup#how-will-i-know-this-will-be-there-when-i-need-it) in Home Assistant when backups are stale.
|
||||
|
||||
---
|
||||
|
||||
### UI Server Options
|
||||
|
||||
The UI is available through Home Assistant [ingress](https://www.home-assistant.io/blog/2019/04/15/hassio-ingress/).
|
||||
|
||||
It can also be exposed through a web server on port `1627`, which you can map to an externally visible port from the add-on `Network` panel. You can configure a few more options to add SSL or require your Home Assistant username/password.
|
||||
|
||||
#### Option: `expose_extra_server` (default: False)
|
||||
|
||||
Expose the webserver on port `1627`. This is optional, as the add-on is already available with Home Assistant ingress.
|
||||
|
||||
#### Option: `require_login` (default: False)
|
||||
|
||||
When true, requires your Home Assistant username and password to access the Web UI.
|
||||
|
||||
#### Option: `use_ssl` (default: False)
|
||||
|
||||
When true, the Web UI exposed by `expose_extra_server` will be served over SSL (HTTPS).
|
||||
|
||||
#### Option: `certfile` (default: `/ssl/certfile.pem`)
|
||||
|
||||
Required when `use_ssl: True`. The path to your SSL certificate file.
|
||||
|
||||
#### Option: `keyfile` (default: `/ssl/keyfile.pem`)
|
||||
|
||||
Required when `use_ssl: True`. The path to your SSL private key file.
|
||||
|
||||
#### Option: `verbose` (default: False)
|
||||
|
||||
When true, enables additional debug logging. Useful if you start seeing errors and need to file a bug with me.
|
||||
|
||||
#### Option: `send_error_reports` (default: False)
|
||||
|
||||
When true, the text of unexpected errors will be sent to a database maintained by the developer. This helps identify problems with new releases and provide better context messages when errors come up.
|
||||
|
||||
#### Option: `delete_after_upload` (default: False)
|
||||
|
||||
When true, backups are deleted from Home Assistant as soon as they've been uploaded to Google Drive. `max_backups_in_ha` is ignored in this case, since a backup never stays in Home Assistant after it is uploaded. Some find this useful if they only have enough space on their Home Assistant machine for one backup.
|
||||
|
||||
## FAQ
|
||||
|
||||
Read the [FAQ on GitHub](https://github.com/sabeechen/hassio-google-drive-backup#faq).
|
||||
12
hassio-google-drive-backup/Dockerfile
Normal file
@@ -0,0 +1,12 @@
|
||||
ARG BUILD_FROM
|
||||
FROM $BUILD_FROM
|
||||
WORKDIR /app
|
||||
COPY . /app
|
||||
RUN chmod +x addon_deps.sh
|
||||
RUN ./addon_deps.sh
|
||||
RUN pip3 install .
|
||||
COPY config.json /usr/local/lib/python3.11/site-packages/config.json
|
||||
|
||||
EXPOSE 1627
|
||||
EXPOSE 8099
|
||||
ENTRYPOINT ["python3", "-m", "backup"]
|
||||
16
hassio-google-drive-backup/Dockerfile-server
Normal file
@@ -0,0 +1,16 @@
|
||||
# Use the official lightweight Python image.
|
||||
# https://hub.docker.com/_/python
|
||||
FROM python:3.11-buster
|
||||
|
||||
# Copy local code to the container image.
|
||||
ENV APP_HOME /server
|
||||
WORKDIR $APP_HOME
|
||||
COPY . ./
|
||||
COPY config.json /usr/local/lib/python3.11/site-packages/config.json
|
||||
|
||||
# Install server python requirements
|
||||
RUN pip3 install --trusted-host pypi.python.org -r requirements-server.txt
|
||||
RUN pip3 install .
|
||||
|
||||
WORKDIR /
|
||||
ENTRYPOINT ["python3", "-m", "backup.server"]
|
||||
41
hassio-google-drive-backup/GENERATIONAL_BACKUP.md
Normal file
@@ -0,0 +1,41 @@
|
||||
# Generational Backup
|
||||
Generational backup lets you keep a longer history of backups on daily, weekly, monthly, and yearly cycles. This is in contrast to the "regular" scheme for keeping a backup history, which always just deletes the oldest backup when needed. Generational backup keeps older backups around longer, which is particularly useful if you made a bad configuration change but didn't notice until several days later.
|
||||
|
||||
## Configuration
|
||||
Generational backup is used when any one of `generational_days`, `generational_weeks`, `generational_months`, or `generational_years` is greater than zero. All of the available configuration options are given below (see the example after this list), but it's much easier to configure them from the Settings dialog accessible from the "Settings" menu at the top of the web UI.
|
||||
* `generational_days` (int): The number of days to keep
|
||||
* `generational_weeks` (int): The number of weeks to keep
|
||||
* `generational_months` (int): The number of months to keep
|
||||
* `generational_years` (int): The number of years to keep
|
||||
* `generational_day_of_week` (str): The day of the week when weekly backups will be kept. It can be one of 'mon', 'tue', 'wed', 'thu', 'fri', 'sat' or 'sun'. The default is 'mon'.
|
||||
* `generational_day_of_month` (int): The day of the month when monthly backups will be kept, from 1 to 31. If a month has fewer days than the configured value, the last day of that month is used.
|
||||
* `generational_day_of_year` (int): The day of the year that yearly backups are kept, from 1 to 365.
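As a sketch, keeping two daily backups and four weekly backups (kept on Sundays) would look like this in the add-on options:

```yaml
generational_days: 2
generational_weeks: 4
generational_day_of_week: "sun"
```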
|
||||
|
||||
## Some Details to Consider
|
||||
* Generational backup assumes that a backup is available for every day to work properly, so it's recommended that you set `days_between_backups`=1 if you're using the feature. Otherwise, a backup may not be available to be saved for a given day.
|
||||
* The backups maintained by generational backup will still never exceed the number you permit to be maintained in Google Drive or Home Assistant. For example, if `max_backups_in_google_drive`=3 and `generational_weeks`=4, then only 3 weeks of backups will be kept in Google Drive.
|
||||
* Generational backup will only delete older backups when it has to. For example, if you've configured it to keep 5 weekly backups on Monday, you've been running it for a week (so you have 7 backups), and `max_backups_in_google_drive`=7, then your backups on Tuesday, Wednesday, etc won't get deleted yet. They won't get deleted until doing so is necessary to keep older backups around without violating the maximum allowed in Google Drive.
|
||||
>Note: You can configure the addon to delete backups more aggressively by setting `generational_delete_early`=true. With this, the addon will delete old backups that don't match a daily, weekly, monthly, or yearly configured cycle even if you aren't yet at risk of exceeding `max_backups_in_ha` or `max_backups_in_google_drive`. Careful though! You can accidentally delete all your backups this way if you don't have all your settings configured just the way you want them.
|
||||
* If more than one backup is created for a day (for example if you create one manually) then only the latest backup from that day will be kept.
|
||||
|
||||
## Schedule
|
||||
Figuring out date math in your head is hard, so it's useful to see a concrete example. Suppose you have the following configuration: two backups for each day, week, month, and year, along with a limit in Google Drive large enough to accommodate them all:
|
||||
```json
|
||||
"days_between_backups": 1,
|
||||
"generational_days": 2,
|
||||
"generational_weeks": 2,
"generational_months": 2,
"generational_years": 2,
|
||||
"max_backups_in_google_drive": 8
|
||||
```
|
||||
Imagine you've been running the add-on for 2 years now, diligently making a backup every day with no interruptions. On 19 May 2021, you could expect your list of backups in Google Drive to look like this:
|
||||
- May 19, 2021 <-- 1st Daily backup
|
||||
- May 18, 2021 <-- 2nd Daily backup
|
||||
- May 13, 2021 <-- 1st Weekly backup
|
||||
- May 06, 2021 <-- 2nd Weekly backup
|
||||
- May 01, 2021 <-- 1st Monthly backup
|
||||
- April 01, 2021 <-- 2nd Monthly backup
|
||||
- January 01, 2021 <-- 1st Yearly backup
|
||||
- January 01, 2020 <-- 2nd Yearly backup
|
||||
|
||||
Note that sometimes a day might overlap more than one schedule. For example, a backup on January 1st could satisfy the constraints for both a yearly and monthly backup. In this case, the add-on will only delete older backups when it *must* to keep from exceeding `max_backups_in_ha` or `max_backups_in_google_drive`. Thus, the most recent backup that would otherwise be deleted will be kept until space is needed somewhere else in the schedule.
|
||||
34
hassio-google-drive-backup/README.md
Normal file
@@ -0,0 +1,34 @@
|
||||
# Home Assistant Add-on: Google Drive Backup
|
||||
|
||||
A complete and easy way to upload your Home Assistant backups to Google Drive.
|
||||
|
||||
## About
|
||||
|
||||
Quickly set up a backup strategy without much fuss. It doesn't require much familiarity with Home Assistant, its architecture, or Google Drive. Detailed install instructions are provided below, but you can just add the repo, click install, and open the Web UI. It will tell you what to do and only takes a few simple clicks.
|
||||
|
||||
>This project requires financial support to make the Google Drive integration work, but it is free for you to use. You can join those helping to keep the lights on at:
|
||||
>
|
||||
>[<img src="https://raw.githubusercontent.com/sabeechen/hassio-google-drive-backup/master/images/bmc-button.svg" width=150 height=40 style="margin: 5px"/>](https://www.buymeacoffee.com/sabeechen)
|
||||
>[<img src="https://raw.githubusercontent.com/sabeechen/hassio-google-drive-backup/master/images/paypal-button.svg" width=150 height=40 style="margin: 5px"/>](https://www.paypal.com/paypalme/stephenbeechen)
|
||||
>[<img src="https://raw.githubusercontent.com/sabeechen/hassio-google-drive-backup/master/images/patreon-button.svg" width=150 height=40 style="margin: 5px"/>](https://www.patreon.com/bePatron?u=4064183)
|
||||
>[<img src="https://raw.githubusercontent.com/sabeechen/hassio-google-drive-backup/master/images/github-sponsors-button.svg" width=150 height=40 style="margin: 5px"/>](https://github.com/sponsors/sabeechen)
|
||||
>[<img src="https://raw.githubusercontent.com/sabeechen/hassio-google-drive-backup/master/images/monero-button.svg" width=150 height=40 style="margin: 5px"/>](https://github.com/sabeechen/hassio-google-drive-backup/blob/master/donate-crypto.md)
|
||||
>[<img src="https://raw.githubusercontent.com/sabeechen/hassio-google-drive-backup/master/images/bitcoin-button.svg" width=150 height=40 style="margin: 5px"/>](https://github.com/sabeechen/hassio-google-drive-backup/blob/master/donate-crypto.md)
|
||||
>[<img src="https://raw.githubusercontent.com/sabeechen/hassio-google-drive-backup/master/images/ethereum-button.svg" width=150 height=40 style="margin: 5px"/>](https://github.com/sabeechen/hassio-google-drive-backup/blob/master/donate-crypto.md)
|
||||
|
||||
|
||||
### Features
|
||||
|
||||
- Creates backups on a configurable schedule.
|
||||
- Uploads backups to Drive, even the ones it didn't create.
|
||||
- Cleans up old backups in Home Assistant and Google Drive so you don't run out of space.
|
||||
- Restore from a fresh install or recover quickly from disaster by uploading your backups directly from Google Drive.
|
||||
- Integrates with Home Assistant Notifications, and provides sensors you can trigger off of.
|
||||
- Notifies you when something goes wrong with your backups.
|
||||
- Super easy installation and configuration.
|
||||
- Privacy-centric design philosophy.
|
||||
- Comprehensive documentation.
|
||||
- _Most certainly_ doesn't mine bitcoin on your home automation server. Definitely not.
|
||||
|
||||
See the [README on GitHub](https://github.com/sabeechen/hassio-google-drive-backup) for all the details, or just install the add-on and open the Web UI.
|
||||
The Web-UI explains everything you have to do.
|
||||
7
hassio-google-drive-backup/addon_deps.sh
Normal file
@@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
apk add python3 fping linux-headers libc-dev libffi-dev python3-dev gcc py3-pip
|
||||
pip3 install --upgrade pip wheel setuptools
|
||||
pip3 install --trusted-host pypi.python.org -r requirements-addon.txt
|
||||
# Remove packages we only needed for installation
|
||||
apk del linux-headers libc-dev libffi-dev python3-dev gcc
|
||||
17
hassio-google-drive-backup/cloudbuild-dev.yaml
Normal file
@@ -0,0 +1,17 @@
|
||||
# How to use:
|
||||
# cd hassio-google-drive-backup
|
||||
# gcloud config set project hassio-drive-backup
|
||||
# gcloud builds submit --config cloudbuild-dev.yaml --substitutions _DOCKERHUB_PASSWORD=<PASSWORD>
|
||||
|
||||
steps:
|
||||
- name: "gcr.io/cloud-builders/docker"
|
||||
entrypoint: "bash"
|
||||
args: ["-c", "docker login --username=sabeechen --password=${_DOCKERHUB_PASSWORD}"]
|
||||
- name: 'gcr.io/cloud-builders/docker'
|
||||
args: [ 'build', '-f', 'Dockerfile-addon', '-t', 'sabeechen/hassio-google-drive-backup-dev-amd64:${_VERSION}', "--build-arg", "BUILD_FROM=homeassistant/amd64-base", '.' ]
|
||||
substitutions:
|
||||
_DOCKERHUB_PASSWORD: "define me" # default value
|
||||
_VERSION: "dev-testing" # default value
|
||||
images:
|
||||
- "sabeechen/hassio-google-drive-backup-dev-amd64:${_VERSION}"
|
||||
|
||||
22
hassio-google-drive-backup/cloudbuild-server.yaml
Normal file
@@ -0,0 +1,22 @@
|
||||
# How to use:
|
||||
# gcloud config set project hassio-drive-backup
|
||||
# gcloud builds submit --config cloudbuild-server.yaml
|
||||
|
||||
#steps:
|
||||
#- name: 'gcr.io/cloud-builders/docker'
|
||||
# args: [ 'build', '-f', 'Dockerfile-server', '-t', 'gcr.io/$PROJECT_ID/authserver', '.' ]
|
||||
#images:
|
||||
#- 'gcr.io/$PROJECT_ID/authserver'
|
||||
|
||||
steps:
|
||||
# Build the container image
|
||||
- name: 'gcr.io/cloud-builders/docker'
|
||||
args: ['build', '-f', 'Dockerfile-server', '-t', 'gcr.io/$PROJECT_ID/${_SERVICE_NAME}:${_VERSION}', '.']
|
||||
# Push the container image to Container Registry
|
||||
- name: 'gcr.io/cloud-builders/docker'
|
||||
args: ['push', 'gcr.io/$PROJECT_ID/${_SERVICE_NAME}:${_VERSION}']
|
||||
substitutions:
|
||||
_SERVICE_NAME: "authserver-dev" # default value
|
||||
_VERSION: "test-deployment" # default value
|
||||
images:
|
||||
- 'gcr.io/$PROJECT_ID/${_SERVICE_NAME}:${_VERSION}'
|
||||
110
hassio-google-drive-backup/config.json
Normal file
@@ -0,0 +1,110 @@
|
||||
{
|
||||
"name": "Home Assistant Google Drive Backup",
|
||||
"version": "0.112.1",
|
||||
"slug": "hassio_google_drive_backup",
|
||||
"description": "Automatically manage backups between Home Assistant and Google Drive",
|
||||
"arch": ["armhf", "armv7", "aarch64", "amd64", "i386"],
|
||||
"url": "https://github.com/sabeechen/hassio-google-drive-backup",
|
||||
"homeassistant_api": true,
|
||||
"hassio_api": true,
|
||||
"hassio_role": "admin",
|
||||
"auth_api": true,
|
||||
"ingress": true,
|
||||
"panel_icon": "mdi:cloud",
|
||||
"panel_title": "Backups",
|
||||
"map": ["ssl", "backup:rw", "config"],
|
||||
"options": {
|
||||
"max_backups_in_ha": 4,
|
||||
"max_backups_in_google_drive": 4,
|
||||
"days_between_backups": 3
|
||||
},
|
||||
"schema": {
|
||||
"max_backups_in_ha": "int(0,)?",
|
||||
"max_backups_in_google_drive": "int(0,)?",
|
||||
"days_between_backups": "float(0,)?",
|
||||
"ignore_other_backups": "bool?",
|
||||
"ignore_upgrade_backups": "bool?",
|
||||
"backup_storage": "str?",
|
||||
|
||||
"delete_after_upload": "bool?",
|
||||
"delete_before_new_backup": "bool?",
|
||||
"verbose": "bool?",
|
||||
"use_ssl": "bool?",
|
||||
"certfile": "str?",
|
||||
"keyfile": "str?",
|
||||
"require_login": "bool?",
|
||||
|
||||
"backup_name": "str?",
|
||||
"backup_time_of_day": "match(^[0-2]\\d:[0-5]\\d$)?",
|
||||
"specify_backup_folder": "bool?",
|
||||
"warn_for_low_space": "bool?",
|
||||
"watch_backup_directory": "bool?",
|
||||
"trace_requests": "bool?",
|
||||
|
||||
"generational_days": "int(0,)?",
|
||||
"generational_weeks": "int(0,)?",
|
||||
"generational_months": "int(0,)?",
|
||||
"generational_years": "int(0,)?",
|
||||
"generational_day_of_year": "int(1,365)?",
|
||||
"generational_day_of_month": "int(1,31)?",
|
||||
"generational_day_of_week": "list(mon|tue|wed|thu|fri|sat|sun)?",
|
||||
"generational_delete_early": "bool?",
|
||||
|
||||
"notify_for_stale_backups": "bool?",
|
||||
"enable_backup_stale_sensor": "bool?",
|
||||
"enable_backup_state_sensor": "bool?",
|
||||
"send_error_reports": "bool?",
|
||||
"backup_password": "str?",
|
||||
"exclude_folders": "str?",
|
||||
"exclude_addons": "str?",
|
||||
"exclude_ha_database": "bool?",
|
||||
"stop_addons": "str?",
|
||||
"disable_watchdog_when_stopping": "bool?",
|
||||
"expose_extra_server": "bool?",
|
||||
"drive_experimental": "bool?",
|
||||
"drive_ipv4": "match(^[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}$)?",
|
||||
"ignore_ipv6_addresses": "bool?",
|
||||
"confirm_multiple_deletes": "bool?",
|
||||
"google_drive_timeout_seconds": "float(1,)?",
|
||||
"alternate_dns_servers": "match(^([0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3})(,[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3})*$)?",
|
||||
"enable_drive_upload": "bool?",
|
||||
"call_backup_snapshot": "bool?",
|
||||
|
||||
"background_color": "match(^(#[0-9ABCDEFabcdef]{6}|)$)?",
|
||||
"accent_color": "match(^(#[0-9ABCDEFabcdef]{6}|)$)?",
|
||||
|
||||
"max_sync_interval_seconds": "float(300,)?",
|
||||
"default_sync_interval_variation": "float(0,1)?",
|
||||
"port": "int(0,)?",
|
||||
"debugger_port": "int(100,)?",
|
||||
"log_level": "list(DEBUG|TRACE|INFO|WARN|CRITICAL|WARNING)?",
|
||||
"console_log_level": "list(DEBUG|TRACE|INFO|WARN|CRITICAL|WARNING)?",
|
||||
"max_backoff_seconds": "int(3600,)?",
|
||||
|
||||
"max_snapshots_in_hassio": "int(0,)?",
|
||||
"max_snapshots_in_google_drive": "int(0,)?",
|
||||
"days_between_snapshots": "float(0,)?",
|
||||
"ignore_other_snapshots": "bool?",
|
||||
"ignore_upgrade_snapshots": "bool?",
|
||||
"delete_before_new_snapshot": "bool?",
|
||||
"delete_ignored_after_days": "float(0,)?",
|
||||
"snapshot_name": "str?",
|
||||
"snapshot_time_of_day": "match(^[0-2]\\d:[0-5]\\d$)?",
|
||||
"specify_snapshot_folder": "bool?",
|
||||
"notify_for_stale_snapshots": "bool?",
|
||||
"enable_snapshot_stale_sensor": "bool?",
|
||||
"enable_snapshot_state_sensor": "bool?",
|
||||
"snapshot_password": "str?",
|
||||
"maximum_upload_chunk_bytes": "float(262144,)?",
|
||||
"ha_reporting_interval_seconds": "int(1,)?",
|
||||
|
||||
"upload_limit_bytes_per_second": "float(0,)?"
|
||||
},
|
||||
"ports": {
|
||||
"1627/tcp": 1627
|
||||
},
|
||||
"ports_description": {
|
||||
"1627/tcp": "Direct access to the add-on without ingress. Must be enabled in the settings, see 'expose_extra_server'."
|
||||
},
|
||||
"image": "ghcr.io/sabeechen/hassio-google-drive-backup-{arch}"
|
||||
}
|
||||
0
hassio-google-drive-backup/dev/__init__.py
Normal file
404
hassio-google-drive-backup/dev/apiingress.py
Normal file
@@ -0,0 +1,404 @@
|
||||
from injector import singleton, inject
|
||||
import asyncio
|
||||
from ipaddress import ip_address
|
||||
from typing import Any, Dict, Union, Optional
|
||||
|
||||
import aiohttp
|
||||
from aiohttp import hdrs, web, ClientSession
|
||||
from aiohttp.web_exceptions import (
|
||||
HTTPBadGateway,
|
||||
HTTPServiceUnavailable,
|
||||
HTTPUnauthorized,
|
||||
HTTPNotFound
|
||||
)
|
||||
from multidict import CIMultiDict, istr
|
||||
|
||||
from backup.logger import getLogger
|
||||
from .ports import Ports
|
||||
from .base_server import BaseServer
|
||||
from .simulated_supervisor import SimulatedSupervisor
|
||||
|
||||
ATTR_ADMIN = "admin"
|
||||
ATTR_ENABLE = "enable"
|
||||
ATTR_ICON = "icon"
|
||||
ATTR_PANELS = "panels"
|
||||
ATTR_SESSION = "session"
|
||||
ATTR_TITLE = "title"
|
||||
COOKIE_INGRESS = "ingress_session"
|
||||
HEADER_TOKEN = "X-Supervisor-Token"
|
||||
HEADER_TOKEN_OLD = "X-Hassio-Key"
|
||||
REQUEST_FROM = "HASSIO_FROM"
|
||||
JSON_RESULT = "result"
|
||||
JSON_DATA = "data"
|
||||
JSON_MESSAGE = "message"
|
||||
RESULT_ERROR = "error"
|
||||
RESULT_OK = "ok"
|
||||
|
||||
_LOGGER = getLogger(__name__)
|
||||
|
||||
|
||||
def api_return_error(message: Optional[str] = None) -> web.Response:
|
||||
"""Return an API error message."""
|
||||
return web.json_response(
|
||||
{JSON_RESULT: RESULT_ERROR, JSON_MESSAGE: message}, status=400
|
||||
)
|
||||
|
||||
|
||||
def api_return_ok(data: Optional[Dict[str, Any]] = None) -> web.Response:
|
||||
"""Return an API ok answer."""
|
||||
return web.json_response({JSON_RESULT: RESULT_OK, JSON_DATA: data or {}})
|
||||
|
||||
|
||||
def api_process(method):
|
||||
"""Wrap function with true/false calls to rest api."""
|
||||
|
||||
async def wrap_api(api, *args, **kwargs):
|
||||
"""Return API information."""
|
||||
try:
|
||||
answer = await method(api, *args, **kwargs)
|
||||
except Exception as err:
|
||||
return api_return_error(message=str(err))
|
||||
|
||||
if isinstance(answer, dict):
|
||||
return api_return_ok(data=answer)
|
||||
if isinstance(answer, web.Response):
|
||||
return answer
|
||||
elif isinstance(answer, bool) and not answer:
|
||||
return api_return_error()
|
||||
return api_return_ok()
|
||||
|
||||
return wrap_api
|
||||
|
||||
|
||||
class Addon():
|
||||
def __init__(self, ports: Ports, token: str):
|
||||
self.ports = ports
|
||||
self.ip_address = "127.0.0.1"
|
||||
self.ingress_port = ports.ingress
|
||||
self.token = token
|
||||
|
||||
|
||||
class SysIngress():
|
||||
def __init__(self, ports: Ports, token: str, cookie_value: str):
|
||||
self.ports = ports
|
||||
self.token = token
|
||||
self.cookie_value = cookie_value
|
||||
|
||||
def validate_session(self, session):
|
||||
return session == self.cookie_value
|
||||
|
||||
def get(self, token):
|
||||
if token == self.token:
|
||||
return Addon(self.ports, self.token)
|
||||
return None
|
||||
|
||||
|
||||
class CoreSysAttributes():
|
||||
def __init__(self, ports: Ports, session: ClientSession, token: str, cookie_value: str):
|
||||
self.sys_ingress = SysIngress(ports, token, cookie_value)
|
||||
self.sys_websession = session
|
||||
|
||||
|
||||
@singleton
|
||||
class APIIngress(CoreSysAttributes, BaseServer):
|
||||
@inject
|
||||
def __init__(self, ports: Ports, session: ClientSession, supervisor: SimulatedSupervisor):
|
||||
self.addon_token = self.generateId(10)
|
||||
self.cookie_value = self.generateId(10)
|
||||
super().__init__(ports, session, self.addon_token, self.cookie_value)
|
||||
self.ports = ports
|
||||
self.supervisor = supervisor
|
||||
|
||||
def routes(self):
|
||||
return [
|
||||
web.get("/startingress", self.start_ingress),
|
||||
web.get("/hassio/ingress/{slug}", self.ingress_panel),
|
||||
web.view("/api/hassio_ingress/{token}/{path:.*}", self.handler),
|
||||
]
|
||||
|
||||
def start_ingress(self, request: web.Request):
|
||||
resp = web.Response(status=303)
|
||||
resp.headers[hdrs.LOCATION] = "/hassio/ingress/" + self.supervisor._addon_slug
|
||||
resp.set_cookie(name=COOKIE_INGRESS, value=self.cookie_value, expires="Session", domain=request.url.host, path="/api/hassio_ingress/", httponly="false", secure="false")
|
||||
return resp
|
||||
|
||||
def ingress_panel(self, request: web.Request):
|
||||
slug = request.match_info.get("slug")
|
||||
if slug != self.supervisor._addon_slug:
|
||||
raise HTTPNotFound()
|
||||
body = """
|
||||
<html>
|
||||
<head>
|
||||
<meta content="text/html;charset=utf-8" http-equiv="Content-Type">
|
||||
<meta content="utf-8" http-equiv="encoding">
|
||||
<title>Simulated Supervisor Ingress Panel</title>
|
||||
<style type="text/css" >
|
||||
iframe {{
|
||||
display: block;
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
border: 0;
|
||||
}}
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div>
|
||||
The Web-UI below is loaded through an iframe. <a href='startingress'>Start a new ingress session</a> if you get permission errors.
|
||||
</div>
|
||||
<iframe src="api/hassio_ingress/{0}/">
|
||||
<html>
|
||||
<head></head>
|
||||
<body></body>
|
||||
</html>
|
||||
</iframe>
|
||||
</body>
|
||||
</html>
|
||||
""".format(self.addon_token)
|
||||
resp = web.Response(body=body, content_type="text/html")
|
||||
resp.set_cookie(name=COOKIE_INGRESS, value=self.cookie_value, expires="Session", domain=request.url.host, path="/api/hassio_ingress/", httponly="false", secure="false")
|
||||
return resp
|
||||
|
||||
"""
|
||||
The class body below here is copied from
|
||||
https://github.com/home-assistant/supervisor/blob/38b0aea8e2a3b9a9614bb5d94959235a0fae235e/supervisor/api/ingress.py#L35
|
||||
In order to correctly reproduce the supervisor's kooky ingress proxy behavior.
|
||||
"""
|
||||
|
||||
def _extract_addon(self, request: web.Request) -> Addon:
"""Return addon; throw an exception if it doesn't exist."""
|
||||
token = request.match_info.get("token")
|
||||
|
||||
# Find correct add-on
|
||||
addon = self.sys_ingress.get(token)
|
||||
if not addon:
|
||||
_LOGGER.warning("Ingress for %s not available", token)
|
||||
raise HTTPServiceUnavailable()
|
||||
|
||||
return addon
|
||||
|
||||
def _check_ha_access(self, request: web.Request) -> None:
|
||||
# always allow
|
||||
pass
|
||||
|
||||
def _create_url(self, addon: Addon, path: str) -> str:
|
||||
"""Create URL to container."""
|
||||
return f"http://{addon.ip_address}:{addon.ingress_port}/{path}"
|
||||
|
||||
@api_process
|
||||
async def panels(self, request: web.Request) -> Dict[str, Any]:
|
||||
"""Create a list of panel data."""
|
||||
addons = {}
|
||||
for addon in self.sys_ingress.addons:
|
||||
addons[addon.slug] = {
|
||||
ATTR_TITLE: addon.panel_title,
|
||||
ATTR_ICON: addon.panel_icon,
|
||||
ATTR_ADMIN: addon.panel_admin,
|
||||
ATTR_ENABLE: addon.ingress_panel,
|
||||
}
|
||||
|
||||
return {ATTR_PANELS: addons}
|
||||
|
||||
@api_process
|
||||
async def create_session(self, request: web.Request) -> Dict[str, Any]:
|
||||
"""Create a new session."""
|
||||
self._check_ha_access(request)
|
||||
|
||||
session = self.sys_ingress.create_session()
|
||||
return {ATTR_SESSION: session}
|
||||
|
||||
async def handler(
|
||||
self, request: web.Request
|
||||
) -> Union[web.Response, web.StreamResponse, web.WebSocketResponse]:
|
||||
"""Route data to Supervisor ingress service."""
|
||||
self._check_ha_access(request)
|
||||
|
||||
# Check Ingress Session
|
||||
session = request.cookies.get(COOKIE_INGRESS)
|
||||
if not self.sys_ingress.validate_session(session):
|
||||
_LOGGER.warning("No valid ingress session %s", session)
|
||||
raise HTTPUnauthorized()
|
||||
|
||||
# Process requests
|
||||
addon = self._extract_addon(request)
|
||||
path = request.match_info.get("path")
|
||||
try:
|
||||
# Websocket
|
||||
if _is_websocket(request):
|
||||
return await self._handle_websocket(request, addon, path)
|
||||
|
||||
# Request
|
||||
return await self._handle_request(request, addon, path)
|
||||
|
||||
except aiohttp.ClientError as err:
|
||||
_LOGGER.error("Ingress error: %s", err)
|
||||
|
||||
raise HTTPBadGateway()
|
||||
|
||||
async def _handle_websocket(
|
||||
self, request: web.Request, addon: Addon, path: str
|
||||
) -> web.WebSocketResponse:
|
||||
"""Ingress route for websocket."""
|
||||
if hdrs.SEC_WEBSOCKET_PROTOCOL in request.headers:
|
||||
req_protocols = [
|
||||
str(proto.strip())
|
||||
for proto in request.headers[hdrs.SEC_WEBSOCKET_PROTOCOL].split(",")
|
||||
]
|
||||
else:
|
||||
req_protocols = ()
|
||||
|
||||
ws_server = web.WebSocketResponse(
|
||||
protocols=req_protocols, autoclose=False, autoping=False
|
||||
)
|
||||
await ws_server.prepare(request)
|
||||
|
||||
# Preparing
|
||||
url = self._create_url(addon, path)
|
||||
source_header = _init_header(request, addon)
|
||||
|
||||
# Support GET query
|
||||
if request.query_string:
|
||||
url = f"{url}?{request.query_string}"
|
||||
|
||||
# Start proxy
|
||||
async with self.sys_websession.ws_connect(
|
||||
url,
|
||||
headers=source_header,
|
||||
protocols=req_protocols,
|
||||
autoclose=False,
|
||||
autoping=False,
|
||||
) as ws_client:
|
||||
# Proxy requests
|
||||
await asyncio.wait(
|
||||
[
|
||||
_websocket_forward(ws_server, ws_client),
|
||||
_websocket_forward(ws_client, ws_server),
|
||||
],
|
||||
return_when=asyncio.FIRST_COMPLETED,
|
||||
)
|
||||
|
||||
return ws_server
|
||||
|
||||
async def _handle_request(
        self, request: web.Request, addon: Addon, path: str
    ) -> Union[web.Response, web.StreamResponse]:
        """Ingress route for request."""
        url = self._create_url(addon, path)
        data = await request.read()
        source_header = _init_header(request, addon)

        async with self.sys_websession.request(
            request.method,
            url,
            headers=source_header,
            params=request.query,
            allow_redirects=False,
            data=data,
        ) as result:
            headers = _response_header(result)

            # Simple request
            if (
                hdrs.CONTENT_LENGTH in result.headers
                and int(result.headers.get(hdrs.CONTENT_LENGTH, 0)) < 4_194_000
            ):
                # Return Response
                body = await result.read()

                return web.Response(
                    headers=headers,
                    status=result.status,
                    content_type=result.content_type,
                    body=body,
                )

            # Stream response
            response = web.StreamResponse(status=result.status, headers=headers)
            response.content_type = result.content_type

            try:
                await response.prepare(request)
                async for data in result.content.iter_chunked(4096):
                    await response.write(data)

            except (
                aiohttp.ClientError,
                aiohttp.ClientPayloadError,
                ConnectionResetError,
            ) as err:
                _LOGGER.error("Stream error with %s: %s", url, err)

            return response


def _init_header(
    request: web.Request, addon: str
) -> Union[CIMultiDict, Dict[str, str]]:
    """Create initial header."""
    headers = {}

    # filter flags
    for name, value in request.headers.items():
        if name in (
            hdrs.CONTENT_LENGTH,
            hdrs.CONTENT_ENCODING,
            hdrs.SEC_WEBSOCKET_EXTENSIONS,
            hdrs.SEC_WEBSOCKET_PROTOCOL,
            hdrs.SEC_WEBSOCKET_VERSION,
            hdrs.SEC_WEBSOCKET_KEY,
            istr(HEADER_TOKEN),
            istr(HEADER_TOKEN_OLD),
        ):
            continue
        headers[name] = value

    # Update X-Forwarded-For
    forward_for = request.headers.get(hdrs.X_FORWARDED_FOR)
    connected_ip = ip_address(request.transport.get_extra_info("peername")[0])
    headers[hdrs.X_FORWARDED_FOR] = f"{forward_for}, {connected_ip!s}"

    return headers


def _response_header(response: aiohttp.ClientResponse) -> Dict[str, str]:
    """Create response header."""
    headers = {}

    for name, value in response.headers.items():
        if name in (
            hdrs.TRANSFER_ENCODING,
            hdrs.CONTENT_LENGTH,
            hdrs.CONTENT_TYPE,
            hdrs.CONTENT_ENCODING,
        ):
            continue
        headers[name] = value

    return headers


def _is_websocket(request: web.Request) -> bool:
    """Return True if request is a websocket."""
    headers = request.headers

    if (
        "upgrade" in headers.get(hdrs.CONNECTION, "").lower()
        and headers.get(hdrs.UPGRADE, "").lower() == "websocket"
    ):
        return True
    return False


async def _websocket_forward(ws_from, ws_to):
    """Handle websocket message directly."""
    try:
        async for msg in ws_from:
            if msg.type == aiohttp.WSMsgType.TEXT:
                await ws_to.send_str(msg.data)
            elif msg.type == aiohttp.WSMsgType.BINARY:
                await ws_to.send_bytes(msg.data)
            elif msg.type == aiohttp.WSMsgType.PING:
                await ws_to.ping()
            elif msg.type == aiohttp.WSMsgType.PONG:
                await ws_to.pong()
            elif ws_to.closed:
                await ws_to.close(code=ws_to.close_code, message=msg.extra)
    except RuntimeError:
        _LOGGER.warning("Ingress Websocket runtime error")
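Note that `_websocket_forward` only pumps messages in one direction. The caller (not shown in this hunk) would typically run two forwarding tasks at once, one per direction, and stop when either side closes. A minimal sketch of that pattern, assuming `ws_server` is the `web.WebSocketResponse` facing the browser and `ws_client` is the upstream socket from `ClientSession.ws_connect` (both names are illustrative):

```python
import asyncio
from aiohttp import ClientWebSocketResponse, web


async def _proxy_websocket(ws_server: web.WebSocketResponse,
                           ws_client: ClientWebSocketResponse) -> None:
    # Pump both directions concurrently; whichever side finishes first ends the proxy.
    to_addon = asyncio.create_task(_websocket_forward(ws_server, ws_client))
    from_addon = asyncio.create_task(_websocket_forward(ws_client, ws_server))
    _, pending = await asyncio.wait(
        [to_addon, from_addon], return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()
```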
56
hassio-google-drive-backup/dev/base_server.py
Normal file
@@ -0,0 +1,56 @@
import random
import re
import io
from aiohttp.web import HTTPBadRequest, Request, Response
from typing import Any

rangePattern = re.compile("bytes=\\d+-\\d+")
bytesPattern = re.compile("^bytes \\d+-\\d+/\\d+$")
intPattern = re.compile("\\d+")


class BaseServer:
    def generateId(self, length: int = 30) -> str:
        random_int = random.randint(0, 1000000)
        ret = str(random_int)
        return ret + ''.join(map(lambda x: str(x), range(0, length - len(ret))))

    def timeToRfc3339String(self, time) -> str:
        return time.strftime("%Y-%m-%dT%H:%M:%SZ")

    def serve_bytes(self, request: Request, bytes: bytearray, include_length: bool = True) -> Any:
        if "Range" in request.headers:
            # Do range request
            if not rangePattern.match(request.headers['Range']):
                raise HTTPBadRequest()

            numbers = intPattern.findall(request.headers['Range'])
            start = int(numbers[0])
            end = int(numbers[1])

            if start < 0:
                raise HTTPBadRequest()
            if start > end:
                raise HTTPBadRequest()
            if end > len(bytes) - 1:
                raise HTTPBadRequest()
            resp = Response(body=bytes[start:end + 1], status=206)
            resp.headers['Content-Range'] = "bytes {0}-{1}/{2}".format(
                start, end, len(bytes))
            if include_length:
                resp.headers["Content-length"] = str(len(bytes))
            return resp
        else:
            resp = Response(body=io.BytesIO(bytes))
            resp.headers["Content-length"] = str(len(bytes))
            return resp

    async def readAll(self, request):
        data = bytearray()
        content = request.content
        while True:
            chunk, done = await content.readchunk()
            data.extend(chunk)
            if len(chunk) == 0:
                break
        return data
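`serve_bytes` implements just enough of HTTP range semantics for the tests: a `Range: bytes=start-end` header produces a 206 partial response with a matching `Content-Range`, and anything malformed is rejected with 400. A short client-side sketch of a partial read against a server built on `BaseServer` (the `/readfile` endpoint and port come from the dev simulation server later in this diff, and are only illustrative here):

```python
import asyncio
import aiohttp


async def read_first_kilobyte() -> bytes:
    # Hypothetical endpoint served via BaseServer.serve_bytes()
    url = "http://localhost:56153/readfile?name=test"
    async with aiohttp.ClientSession() as session:
        async with session.get(url, headers={"Range": "bytes=0-1023"}) as resp:
            assert resp.status == 206  # partial content
            print(resp.headers.get("Content-Range"))  # e.g. "bytes 0-1023/4096"
            return await resp.read()


if __name__ == "__main__":
    asyncio.run(read_first_kilobyte())
```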
@@ -0,0 +1,3 @@
authorization_host: "https://dev.habackup.io"
token_server_hosts: "https://token1.dev.habackup.io,https://dev.habackup.io"
default_drive_client_id: "795575624694-jcdhoh1jr1ngccfsbi2f44arr4jupl79.apps.googleusercontent.com"
27
hassio-google-drive-backup/dev/data/dev_options.json
Normal file
@@ -0,0 +1,27 @@
{
  "drive_url": "http://localhost:56153",
  "supervisor_url": "http://localhost:56153/",
  "hassio_header": "test_header",
  "retained_file_path": "hassio-google-drive-backup/dev/data/retained.json",
  "data_cache_file_path": "hassio-google-drive-backup/dev/data/data_cache.json",
  "backup_directory_path": "hassio-google-drive-backup/dev/backup",
  "certfile": "hassio-google-drive-backup/dev/ssl/fullchain.pem",
  "keyfile": "hassio-google-drive-backup/dev/ssl/privkey.pem",
  "secrets_file_path": "hassio-google-drive-backup/dev/data/secrets.yaml",
  "credentials_file_path": "hassio-google-drive-backup/dev/data/credentials.dat",
  "folder_file_path": "hassio-google-drive-backup/dev/data/folder.dat",
  "id_file_path": "hassio-google-drive-backup/dev/data/id.json",
  "stop_addon_state_path": "hassio-google-drive-backup/dev/data/stop_addon_state.json",
  "authorization_host": "http://localhost:56153",
  "token_server_hosts": "http://localhost:56153",
  "drive_refresh_url": "http://localhost:56153/oauth2/v4/token",
  "drive_authorize_url": "http://localhost:56153/o/oauth2/v2/auth",
  "drive_device_code_url": "http://localhost:56153/device/code",
  "drive_token_url": "http://localhost:56153/token",
  "ingress_token_file_path": "hassio-google-drive-backup/dev/data/ingress.dat",
  "log_level": "TRACE",
  "console_log_level": "TRACE",
  "ingress_port": 56152,
  "port": 56151,
  "cache_warmup_max_seconds": 300
}
20
hassio-google-drive-backup/dev/data/drive_dev_options.json
Normal file
@@ -0,0 +1,20 @@
{
  "supervisor_url": "http://localhost:56153/",
  "authorization_host": "https://dev.habackup.io",
  "token_server_hosts": "https://token1.dev.habackup.io,https://dev.habackup.io",
  "hassio_header": "test_header",
  "data_cache_file_path": "hassio-google-drive-backup/dev/data/data_cache.json",
  "retained_file_path": "hassio-google-drive-backup/dev/data/retained.json",
  "backup_directory_path": "hassio-google-drive-backup/dev/backup",
  "certfile": "hassio-google-drive-backup/dev/ssl/fullchain.pem",
  "keyfile": "hassio-google-drive-backup/dev/ssl/privkey.pem",
  "secrets_file_path": "hassio-google-drive-backup/dev/data/secrets.yaml",
  "credentials_file_path": "hassio-google-drive-backup/dev/data/credentials.dat",
  "folder_file_path": "hassio-google-drive-backup/dev/data/folder.dat",
  "id_file_path": "hassio-google-drive-backup/dev/data/id.json",
  "stop_addon_state_path": "hassio-google-drive-backup/dev/data/stop_addon_state.json",
  "ingress_token_file_path": "hassio-google-drive-backup/dev/data/ingress.dat",
  "default_drive_client_id": "795575624694-jcdhoh1jr1ngccfsbi2f44arr4jupl79.apps.googleusercontent.com",
  "ingress_port": 56152,
  "port": 56151
}
17
hassio-google-drive-backup/dev/data/drive_options.json
Normal file
@@ -0,0 +1,17 @@
{
  "supervisor_url": "http://localhost:56153/",
  "hassio_header": "test_header",
  "data_cache_file_path": "hassio-google-drive-backup/dev/data/data_cache.json",
  "retained_file_path": "hassio-google-drive-backup/dev/data/retained.json",
  "backup_directory_path": "hassio-google-drive-backup/dev/backup",
  "certfile": "hassio-google-drive-backup/dev/ssl/fullchain.pem",
  "keyfile": "hassio-google-drive-backup/dev/ssl/privkey.pem",
  "secrets_file_path": "hassio-google-drive-backup/dev/data/secrets.yaml",
  "credentials_file_path": "hassio-google-drive-backup/dev/data/credentials.dat",
  "folder_file_path": "hassio-google-drive-backup/dev/data/folder.dat",
  "ingress_token_file_path": "hassio-google-drive-backup/dev/data/ingress.dat",
  "id_file_path": "hassio-google-drive-backup/dev/data/id.json",
  "stop_addon_state_path": "hassio-google-drive-backup/dev/data/stop_addon_state.json",
  "ingress_port": 56155,
  "port": 56156
}
11
hassio-google-drive-backup/dev/data/options.json
Normal file
@@ -0,0 +1,11 @@
{
  "send_error_reports": true,
  "max_backups_in_ha": 4,
  "max_backups_in_google_drive": 3,
  "days_between_backups": 10,
  "use_ssl": false,
  "backup_name": "{type} Backup {year}-{month}-{day} {hr24}:{min}:{sec}",
  "backup_password": "!secret password1",
  "drive_experimental": true,
  "drive_ipv4": ""
}
2
hassio-google-drive-backup/dev/data/secrets.yaml
Normal file
@@ -0,0 +1,2 @@
password1: "Test value"
for_unit_tests: "password value"
6
hassio-google-drive-backup/dev/deploy.sh
Executable file
@@ -0,0 +1,6 @@
#!/bin/bash
sudo docker run --rm --privileged \
    -v /home/coder/.docker:/root/.docker \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ..:/data \
    homeassistant/amd64-builder --all -t /data
19
hassio-google-drive-backup/dev/deploy_addon.py
Normal file
@@ -0,0 +1,19 @@
import subprocess
import os
import json
from os.path import abspath, join

with open(abspath(join(__file__, "..", "..", "config.json"))) as f:
    version = json.load(f)["version"]
print("Version will be: " + version)
subprocess.run("docker login", shell=True)


platforms = ["amd64", "armv7", "aarch64", "armhf", "i386"]

os.chdir("hassio-google-drive-backup")
for platform in platforms:
    subprocess.run("docker build -f Dockerfile-addon -t sabeechen/hassio-google-drive-backup-{0}:{1} --build-arg BUILD_FROM=homeassistant/{0}-base .".format(platform, version), shell=True)

for platform in platforms:
    subprocess.run("docker push sabeechen/hassio-google-drive-backup-{0}:{1}".format(platform, version), shell=True)
20
hassio-google-drive-backup/dev/deploy_dev_addon.py
Normal file
@@ -0,0 +1,20 @@
import getpass
import subprocess
import os
import json
from os.path import abspath, join

with open(abspath(join(__file__, "..", "..", "config.json"))) as f:
    version = json.load(f)["version"]

try:
    p = getpass.getpass("Enter DockerHub Password")
except Exception as error:
    print('ERROR', error)
    exit()

os.chdir("hassio-google-drive-backup")
print("Setting the appropriate gcloud project...")
subprocess.run("gcloud config set project hassio-drive-backup", shell=True)
print("Building and uploading dev container...")
subprocess.run("gcloud builds submit --config cloudbuild-dev.yaml --substitutions _DOCKERHUB_PASSWORD={0},_VERSION={1}".format(p, version), shell=True)
8
hassio-google-drive-backup/dev/deploy_dev_server.py
Normal file
@@ -0,0 +1,8 @@
import subprocess
import os

os.chdir("hassio-google-drive-backup")
print("Setting the appropriate gcloud project...")
subprocess.run("gcloud config set project hassio-drive-backup-dev", shell=True)
print("Building and uploading server container...")
subprocess.run("gcloud builds submit --config cloudbuild-server.yaml", shell=True)
8
hassio-google-drive-backup/dev/deploy_server.py
Normal file
@@ -0,0 +1,8 @@
import subprocess
import os

os.chdir("hassio-google-drive-backup")
print("Setting the appropriate gcloud project...")
subprocess.run("gcloud config set project hassio-drive-backup", shell=True)
print("Building and uploading server container...")
subprocess.run("gcloud builds submit --config cloudbuild-server.yaml", shell=True)
57
hassio-google-drive-backup/dev/error_tools.py
Normal file
@@ -0,0 +1,57 @@
import argparse
from google.cloud import firestore
from datetime import datetime, timedelta
DELETE_BATCH_SIZE = 200
STORE_NAME = "error_reports"


def delete_old_data():
    # Initialize Firestore
    db = firestore.Client()
    collection_ref = db.collection(STORE_NAME)

    # Define the datetime for one week ago
    week_ago = datetime.now() - timedelta(days=7)

    # Query to find all documents older than a week
    total_deleted = 0
    while True:
        to_delete = 0
        batch = db.batch()
        docs = collection_ref.where('server_time', '<', week_ago).stream()
        for doc in docs:
            to_delete += 1
            batch.delete(doc.reference)
            if to_delete >= DELETE_BATCH_SIZE:
                break
        if to_delete > 0:
            batch.commit()
            total_deleted += to_delete
            print(f"Deleted {to_delete} documents ({total_deleted} total)")
        else:
            break
    print(f"Success: All documents older than a week deleted ({total_deleted} total)")


def main():
    # Create command line argument parser
    parser = argparse.ArgumentParser()

    # Add purge argument
    parser.add_argument("--purge", help="Delete all documents older than a week.", action="store_true")

    # Add any other argument you want in future. For example:
    # parser.add_argument("--future_arg", help="Perform some future operation.")

    args = parser.parse_args()

    # Respond to arguments
    if args.purge:
        confirm = input('Are you sure you want to delete all documents older than a week? (y/n): ')
        if confirm.lower() == 'y':
            delete_old_data()
        else:
            print("Abort: No documents were deleted.")

if __name__ == "__main__":
    main()
6
hassio-google-drive-backup/dev/http_exception.py
Normal file
@@ -0,0 +1,6 @@
from aiohttp.web import HTTPClientError


class HttpMultiException(HTTPClientError):
    def __init__(self, code):
        self.status_code = code
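`HttpMultiException` exists so test code can ask the simulated servers for an arbitrary HTTP status: a handler raises it with the desired code, and the error middleware in `simulationserver.py` (later in this diff) converts it into a plain response. A rough sketch of that conversion, assuming `ex` is the caught exception:

```python
from aiohttp.web import Response

def to_response(ex: Exception) -> Response:
    # Mirrors what the simulation server's middleware does with this exception type.
    if isinstance(ex, HttpMultiException):
        return Response(status=ex.status_code)
    raise ex
```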
5
hassio-google-drive-backup/dev/ports.py
Normal file
@@ -0,0 +1,5 @@
class Ports:
    def __init__(self, server, ui, ingress):
        self.server = server
        self.ui = ui
        self.ingress = ingress
136
hassio-google-drive-backup/dev/request_interceptor.py
Normal file
@@ -0,0 +1,136 @@
|
||||
import re
|
||||
from aiohttp.web import Request, Response
|
||||
from asyncio import Event
|
||||
from aiohttp.web_response import json_response
|
||||
from injector import singleton, inject
|
||||
from backup.time import Time
|
||||
from typing import List
|
||||
|
||||
|
||||
class UrlMatch():
|
||||
def __init__(self, time: Time, url, fail_after=None, status=None, response=None, wait=False, sleep=None, fail_for=None):
|
||||
self.time = time
|
||||
self.url: str = url
|
||||
self.fail_after: int = fail_after
|
||||
self.status: int = status
|
||||
self.wait_event: Event = Event()
|
||||
self.trigger_event: Event = Event()
|
||||
self.response: str = ""
|
||||
self.wait: bool = wait
|
||||
self.trigger_event.clear()
|
||||
self.wait_event.clear()
|
||||
self.sleep = sleep
|
||||
self.response = response
|
||||
self.fail_for = fail_for
|
||||
self.responses = []
|
||||
self._calls = 0
|
||||
self.time = time
|
||||
|
||||
def addResponse(self, response):
|
||||
self.responses.append(response)
|
||||
|
||||
def stop(self):
|
||||
self.wait_event.set()
|
||||
self.trigger_event.set()
|
||||
|
||||
def isMatch(self, request):
|
||||
return re.match(self.url, request.url.path) or re.match(self.url, str(request.url))
|
||||
|
||||
async def waitForCall(self):
|
||||
await self.trigger_event.wait()
|
||||
|
||||
def clear(self):
|
||||
self.wait_event.set()
|
||||
|
||||
def callCount(self):
|
||||
return self._calls
|
||||
|
||||
async def _doAction(self, request: Request):
|
||||
self._calls += 1
|
||||
if len(self.responses) > 0:
|
||||
return self.responses.pop(0)
|
||||
if self.status is not None:
|
||||
await self._readAll(request)
|
||||
if self.response:
|
||||
return json_response(self.response, status=self.status)
|
||||
else:
|
||||
return Response(status=self.status)
|
||||
elif self.wait:
|
||||
self.trigger_event.set()
|
||||
await self.wait_event.wait()
|
||||
elif self.sleep is not None:
|
||||
await self.time.sleepAsync(self.sleep, early_exit=self.wait_event)
|
||||
|
||||
async def called(self, request: Request):
|
||||
if self.fail_after is None or self.fail_after <= 0:
|
||||
if self.fail_for is not None and self.fail_for > 0:
|
||||
self.fail_for -= 1
|
||||
return await self._doAction(request)
|
||||
elif self.fail_for is not None:
|
||||
return None
|
||||
|
||||
return await self._doAction(request)
|
||||
elif self.fail_after is not None:
|
||||
self.fail_after -= 1
|
||||
|
||||
async def _readAll(self, request: Request):
|
||||
data = bytearray()
|
||||
content = request.content
|
||||
while True:
|
||||
chunk, done = await content.readchunk()
|
||||
data.extend(chunk)
|
||||
if len(chunk) == 0:
|
||||
break
|
||||
return data
|
||||
|
||||
|
||||
@singleton
|
||||
class RequestInterceptor:
|
||||
@inject
|
||||
def __init__(self):
|
||||
self._matchers: List[UrlMatch] = []
|
||||
self._history = []
|
||||
self.time = Time()
|
||||
|
||||
def stop(self):
|
||||
for matcher in self._matchers:
|
||||
matcher.stop()
|
||||
|
||||
def setError(self, url, status=None, fail_after=None, fail_for=None, response=None) -> UrlMatch:
|
||||
matcher = UrlMatch(self.time, url, fail_after, status=status, response=response, fail_for=fail_for)
|
||||
self._matchers.append(matcher)
|
||||
return matcher
|
||||
|
||||
def clear(self):
|
||||
self._matchers.clear()
|
||||
self._history.clear()
|
||||
|
||||
def setWaiter(self, url, attempts=None):
|
||||
matcher = UrlMatch(self.time, url, attempts, wait=True)
|
||||
self._matchers.append(matcher)
|
||||
return matcher
|
||||
|
||||
def setSleep(self, url, attempts=None, sleep=None, wait_for=None):
|
||||
matcher = UrlMatch(self.time, url, attempts, sleep=sleep, fail_for=wait_for)
|
||||
self._matchers.append(matcher)
|
||||
return matcher
|
||||
|
||||
async def checkUrl(self, request):
|
||||
ret = None
|
||||
self.record(request)
|
||||
for match in self._matchers:
|
||||
if match.isMatch(request):
|
||||
ret = await match.called(request)
|
||||
return ret
|
||||
|
||||
def record(self, request: Request):
|
||||
record = str(request.url.path)
|
||||
if len(request.url.query_string) > 0:
|
||||
record += "?" + str(request.url.query_string)
|
||||
self._history.append(record)
|
||||
|
||||
def urlWasCalled(self, url) -> bool:
|
||||
for called_url in self._history:
|
||||
if url == called_url or re.match(url, called_url):
|
||||
return True
|
||||
return False
|
||||
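The interceptor above is how the tests inject failures and delays into the simulated Google and Supervisor endpoints without touching the handlers themselves. A usage sketch based only on the methods defined in this file (the URL pattern is illustrative):

```python
# Assumes a RequestInterceptor that is already wired into the server middleware.
def simulate_two_drive_failures(interceptor: RequestInterceptor) -> None:
    # Fail the next two requests matching the Drive files API with HTTP 500,
    # then let later requests pass through to the real (simulated) handler.
    matcher = interceptor.setError("^/drive/v3/files/", status=500, fail_for=2)

    # ... run the code under test ...

    assert interceptor.urlWasCalled("^/drive/v3/files/")
    print("matched calls:", matcher.callCount())
```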
522
hassio-google-drive-backup/dev/simulated_google.py
Normal file
@@ -0,0 +1,522 @@
|
||||
import re
|
||||
|
||||
from yarl import URL
|
||||
from datetime import timedelta
|
||||
from backup.logger import getLogger
|
||||
from backup.config import Setting, Config
|
||||
from backup.time import Time
|
||||
from backup.creds import KEY_CLIENT_SECRET, KEY_CLIENT_ID, KEY_ACCESS_TOKEN, KEY_TOKEN_EXPIRY
|
||||
from aiohttp.web import (HTTPBadRequest, HTTPNotFound,
|
||||
HTTPUnauthorized, Request, Response, delete, get,
|
||||
json_response, patch, post, put, HTTPSeeOther)
|
||||
from injector import inject, singleton
|
||||
from .base_server import BaseServer, bytesPattern, intPattern
|
||||
from .ports import Ports
|
||||
from typing import Any, Dict
|
||||
from asyncio import Event
|
||||
from backup.creds import Creds
|
||||
|
||||
logger = getLogger(__name__)
|
||||
|
||||
mimeTypeQueryPattern = re.compile("^mimeType='.*'$")
|
||||
parentsQueryPattern = re.compile("^'.*' in parents$")
|
||||
resumeBytesPattern = re.compile("^bytes \\*/\\d+$")
|
||||
|
||||
URL_MATCH_DRIVE_API = "^.*drive.*$"
|
||||
URL_MATCH_UPLOAD = "^/upload/drive/v3/files/$"
|
||||
URL_MATCH_UPLOAD_PROGRESS = "^/upload/drive/v3/files/progress/.*$"
|
||||
URL_MATCH_CREATE = "^/upload/drive/v3/files/progress/.*$"
|
||||
URL_MATCH_FILE = "^/drive/v3/files/.*$"
|
||||
URL_MATCH_DEVICE_CODE = "^/device/code$"
|
||||
URL_MATCH_TOKEN = "^/token$"
|
||||
|
||||
|
||||
@singleton
|
||||
class SimulatedGoogle(BaseServer):
|
||||
@inject
|
||||
def __init__(self, config: Config, time: Time, ports: Ports):
|
||||
self._time = time
|
||||
self.config = config
|
||||
|
||||
# auth state
|
||||
self._custom_drive_client_id = self.generateId(5)
|
||||
self._custom_drive_client_secret = self.generateId(5)
|
||||
self._custom_drive_client_expiration = None
|
||||
self._drive_auth_code = "drive_auth_code"
|
||||
self._port = ports.server
|
||||
self._auth_token = ""
|
||||
self._refresh_token = "test_refresh_token"
|
||||
self._client_id_hack = None
|
||||
|
||||
# Drive item states
|
||||
self.items = {}
|
||||
self.lostPermission = []
|
||||
self.space_available = 5 * 1024 * 1024 * 1024
|
||||
self.usage = 0
|
||||
|
||||
# Upload state information
|
||||
self._upload_info: Dict[str, Any] = {}
|
||||
self.chunks = []
|
||||
self._upload_chunk_wait = Event()
|
||||
self._upload_chunk_trigger = Event()
|
||||
self._current_chunk = 1
|
||||
self._waitOnChunk = 0
|
||||
self.device_auth_params = {}
|
||||
self._device_code_accepted = None
|
||||
|
||||
def setDriveSpaceAvailable(self, bytes_available):
|
||||
self.space_available = bytes_available
|
||||
|
||||
def generateNewAccessToken(self):
|
||||
new_token = self.generateId(20)
|
||||
self._auth_token = new_token
|
||||
|
||||
def generateNewRefreshToken(self):
|
||||
new_token = self.generateId(20)
|
||||
self._refresh_token = new_token
|
||||
|
||||
def expireCreds(self):
|
||||
self.generateNewAccessToken()
|
||||
self.generateNewRefreshToken()
|
||||
|
||||
def expireRefreshToken(self):
|
||||
self.generateNewRefreshToken()
|
||||
|
||||
def resetDriveAuth(self):
|
||||
self.expireCreds()
|
||||
self.config.override(Setting.DEFAULT_DRIVE_CLIENT_ID, self.generateId(5))
|
||||
self.config.override(Setting.DEFAULT_DRIVE_CLIENT_SECRET, self.generateId(5))
|
||||
|
||||
def creds(self):
|
||||
return Creds(self._time,
|
||||
id=self.config.get(Setting.DEFAULT_DRIVE_CLIENT_ID),
|
||||
expiration=self._time.now() + timedelta(hours=1),
|
||||
access_token=self._auth_token,
|
||||
refresh_token=self._refresh_token)
|
||||
|
||||
def routes(self):
|
||||
return [
|
||||
put('/upload/drive/v3/files/progress/{id}', self._uploadProgress),
|
||||
post('/upload/drive/v3/files/', self._upload),
|
||||
post('/drive/v3/files/', self._create),
|
||||
get('/drive/v3/files/', self._query),
|
||||
delete('/drive/v3/files/{id}/', self._delete),
|
||||
patch('/drive/v3/files/{id}/', self._update),
|
||||
get('/drive/v3/files/{id}/', self._get),
|
||||
post('/oauth2/v4/token', self._oauth2Token),
|
||||
get('/o/oauth2/v2/auth', self._oAuth2Authorize),
|
||||
get('/drive/customcreds', self._getCustomCred),
|
||||
get('/drive/v3/about', self._driveAbout),
|
||||
post('/device/code', self._deviceCode),
|
||||
get('/device', self._device),
|
||||
get('/debug/google', self._debug),
|
||||
post('/token', self._driveToken),
|
||||
]
|
||||
|
||||
async def _debug(self, request: Request):
|
||||
return json_response({
|
||||
"custom_drive_client_id": self._custom_drive_client_id,
|
||||
"custom_drive_client_secret": self._custom_drive_client_secret,
|
||||
"device_auth_params": self.device_auth_params
|
||||
})
|
||||
|
||||
async def _checkDriveHeaders(self, request: Request):
|
||||
if request.headers.get("Authorization", "") != "Bearer " + self._auth_token:
|
||||
raise HTTPUnauthorized()
|
||||
|
||||
async def _deviceCode(self, request: Request):
|
||||
params = await request.post()
|
||||
client_id = params['client_id']
|
||||
scope = params['scope']
|
||||
if client_id != self._custom_drive_client_id or scope != 'https://www.googleapis.com/auth/drive.file':
|
||||
raise HTTPUnauthorized()
|
||||
|
||||
self.device_auth_params = {
|
||||
'device_code': self.generateId(10),
|
||||
'expires_in': 60,
|
||||
'interval': 1,
|
||||
'user_code': self.generateId(8),
|
||||
'verification_url': str(URL("http://localhost").with_port(self._port).with_path("device"))
|
||||
}
|
||||
self._device_code_accepted = None
|
||||
return json_response(self.device_auth_params)
|
||||
|
||||
async def _device(self, request: Request):
|
||||
code = request.query.get('code')
|
||||
if code:
|
||||
if self.device_auth_params.get('user_code', "dfsdfsdfsdfs") == code:
|
||||
body = "Accepted"
|
||||
self._device_code_accepted = True
|
||||
self.generateNewRefreshToken()
|
||||
self.generateNewAccessToken()
|
||||
else:
|
||||
body = "Wrong code"
|
||||
else:
|
||||
body = """
|
||||
<html>
|
||||
<head>
|
||||
<meta content="text/html;charset=utf-8" http-equiv="Content-Type">
|
||||
<meta content="utf-8" http-equiv="encoding">
|
||||
<title>Simulated Drive Device Authorization</title>
|
||||
</head>
|
||||
<body>
|
||||
<div>
|
||||
Enter the device code provided below
|
||||
</div>
|
||||
<form>
|
||||
<label for="code">Device Code:</label><br>
|
||||
<input type="text" value="Device Code" id="code" name="code">
|
||||
<input type="submit" value="Submit">
|
||||
</form>
|
||||
</body>
|
||||
</html>
|
||||
"""
|
||||
resp = Response(body=body, content_type="text/html")
|
||||
return resp
|
||||
|
||||
async def _oAuth2Authorize(self, request: Request):
|
||||
query = request.query
|
||||
if query.get('client_id') != self.config.get(Setting.DEFAULT_DRIVE_CLIENT_ID) and query.get('client_id') != self._custom_drive_client_id:
|
||||
raise HTTPUnauthorized()
|
||||
if query.get('scope') != 'https://www.googleapis.com/auth/drive.file':
|
||||
raise HTTPUnauthorized()
|
||||
if query.get('response_type') != 'code':
|
||||
raise HTTPUnauthorized()
|
||||
if query.get('include_granted_scopes') != 'true':
|
||||
raise HTTPUnauthorized()
|
||||
if query.get('access_type') != 'offline':
|
||||
raise HTTPUnauthorized()
|
||||
if 'state' not in query:
|
||||
raise HTTPUnauthorized()
|
||||
if 'redirect_uri' not in query:
|
||||
raise HTTPUnauthorized()
|
||||
if query.get('prompt') != 'consent':
|
||||
raise HTTPUnauthorized()
|
||||
if query.get('redirect_uri') == 'urn:ietf:wg:oauth:2.0:oob':
|
||||
return json_response({"code": self._drive_auth_code})
|
||||
url = URL(query.get('redirect_uri')).with_query({'code': self._drive_auth_code, 'state': query.get('state')})
|
||||
raise HTTPSeeOther(str(url))
|
||||
|
||||
async def _getCustomCred(self, request: Request):
|
||||
return json_response({
|
||||
"client_id": self._custom_drive_client_id,
|
||||
"client_secret": self._custom_drive_client_secret
|
||||
})
|
||||
|
||||
async def _driveToken(self, request: Request):
|
||||
data = await request.post()
|
||||
if not self._checkClientIdandSecret(data.get('client_id'), data.get('client_secret')):
|
||||
raise HTTPUnauthorized()
|
||||
if data.get('grant_type') == 'authorization_code':
|
||||
if data.get('redirect_uri') not in ["http://localhost:{}/drive/authorize".format(self._port), 'urn:ietf:wg:oauth:2.0:oob']:
|
||||
raise HTTPUnauthorized()
|
||||
if data.get('code') != self._drive_auth_code:
|
||||
raise HTTPUnauthorized()
|
||||
elif data.get('grant_type') == 'urn:ietf:params:oauth:grant-type:device_code':
|
||||
if data.get('device_code') != self.device_auth_params['device_code']:
|
||||
raise HTTPUnauthorized()
|
||||
if self._device_code_accepted is None:
|
||||
return json_response({
|
||||
"error": "authorization_pending",
|
||||
"error_description": "Precondition Required"
|
||||
}, status=428)
|
||||
elif self._device_code_accepted is False:
|
||||
raise HTTPUnauthorized()
|
||||
else:
|
||||
raise HTTPBadRequest()
|
||||
self.generateNewRefreshToken()
|
||||
resp = {
|
||||
'access_token': self._auth_token,
|
||||
'refresh_token': self._refresh_token,
|
||||
KEY_CLIENT_ID: data.get('client_id'),
|
||||
KEY_CLIENT_SECRET: self.config.get(Setting.DEFAULT_DRIVE_CLIENT_SECRET),
|
||||
KEY_TOKEN_EXPIRY: self.timeToRfc3339String(self._time.now()),
|
||||
}
|
||||
if self._custom_drive_client_expiration is not None:
|
||||
resp[KEY_TOKEN_EXPIRY] = self.timeToRfc3339String(self._custom_drive_client_expiration)
|
||||
return json_response(resp)
|
||||
|
||||
def _checkClientIdandSecret(self, client_id: str, client_secret: str) -> bool:
|
||||
if self._custom_drive_client_id == client_id and self._custom_drive_client_secret == client_secret:
|
||||
return True
|
||||
if client_id == self.config.get(Setting.DEFAULT_DRIVE_CLIENT_ID) and client_secret == self.config.get(Setting.DEFAULT_DRIVE_CLIENT_SECRET):
|
||||
return True
|
||||
|
||||
if self._client_id_hack is not None:
|
||||
if client_id == self._client_id_hack and client_secret == self.config.get(Setting.DEFAULT_DRIVE_CLIENT_SECRET):
|
||||
return True
|
||||
return False
|
||||
|
||||
async def _oauth2Token(self, request: Request):
|
||||
params = await request.post()
|
||||
if not self._checkClientIdandSecret(params['client_id'], params['client_secret']):
|
||||
raise HTTPUnauthorized()
|
||||
if params['refresh_token'] != self._refresh_token:
|
||||
raise HTTPUnauthorized()
|
||||
if params['grant_type'] == 'refresh_token':
|
||||
self.generateNewAccessToken()
|
||||
return json_response({
|
||||
'access_token': self._auth_token,
|
||||
'expires_in': 3600,
|
||||
'token_type': 'doesn\'t matter'
|
||||
})
|
||||
elif params['grant_type'] == 'urn:ietf:params:oauth:grant-type:device_code':
|
||||
if params['device_code'] != self.device_auth_params['device_code']:
|
||||
raise HTTPUnauthorized()
|
||||
if not self._device_code_accepted:
|
||||
return json_response({
|
||||
"error": "authorization_pending",
|
||||
"error_description": "Precondition Required"
|
||||
}, status=428)
|
||||
return json_response({
|
||||
'access_token': self._auth_token,
|
||||
'expires_in': 3600,
|
||||
'token_type': 'doesn\'t matter'
|
||||
})
|
||||
else:
|
||||
raise HTTPUnauthorized()
|
||||
|
||||
def filter_fields(self, item: Dict[str, Any], fields) -> Dict[str, Any]:
|
||||
ret = {}
|
||||
for field in fields:
|
||||
if field in item:
|
||||
ret[field] = item[field]
|
||||
return ret
|
||||
|
||||
def parseFields(self, source: str):
|
||||
fields = []
|
||||
for field in source.split(","):
|
||||
if field.startswith("files("):
|
||||
fields.append(field[6:])
|
||||
elif field.endswith(")"):
|
||||
fields.append(field[:-1])
|
||||
else:
|
||||
fields.append(field)
|
||||
return fields
|
||||
|
||||
def formatItem(self, base, id):
|
||||
caps = base.get('capabilities', {})
|
||||
if 'capabilities' not in base:
|
||||
base['capabilities'] = caps
|
||||
if 'canAddChildren' not in caps:
|
||||
caps['canAddChildren'] = True
|
||||
if 'canListChildren' not in caps:
|
||||
caps['canListChildren'] = True
|
||||
if 'canDeleteChildren' not in caps:
|
||||
caps['canDeleteChildren'] = True
|
||||
if 'canTrashChildren' not in caps:
|
||||
caps['canTrashChildren'] = True
|
||||
if 'canTrash' not in caps:
|
||||
caps['canTrash'] = True
|
||||
if 'canDelete' not in caps:
|
||||
caps['canDelete'] = True
|
||||
|
||||
for parent in base.get("parents", []):
|
||||
parent_item = self.items[parent]
|
||||
# This simulates a very simply shared drive permissions structure
|
||||
if parent_item.get("driveId", None) is not None:
|
||||
base["driveId"] = parent_item["driveId"]
|
||||
base["capabilities"] = parent_item["capabilities"]
|
||||
base['trashed'] = False
|
||||
base['id'] = id
|
||||
base['modifiedTime'] = self.timeToRfc3339String(self._time.now())
|
||||
return base
|
||||
|
||||
async def _get(self, request: Request):
|
||||
id = request.match_info.get('id')
|
||||
await self._checkDriveHeaders(request)
|
||||
if id not in self.items:
|
||||
raise HTTPNotFound()
|
||||
if id in self.lostPermission:
|
||||
return Response(
|
||||
status=403,
|
||||
content_type="application/json",
|
||||
text='{"error": {"errors": [{"reason": "forbidden"}]}}')
|
||||
request_type = request.query.get("alt", "metadata")
|
||||
if request_type == "media":
|
||||
# return bytes
|
||||
item = self.items[id]
|
||||
if 'bytes' not in item:
|
||||
raise HTTPBadRequest()
|
||||
return self.serve_bytes(request, item['bytes'], include_length=False)
|
||||
else:
|
||||
fields = request.query.get("fields", "id").split(",")
|
||||
return json_response(self.filter_fields(self.items[id], fields))
|
||||
|
||||
async def _update(self, request: Request):
|
||||
id = request.match_info.get('id')
|
||||
await self._checkDriveHeaders(request)
|
||||
if id not in self.items:
|
||||
raise HTTPNotFound()
|
||||
update = await request.json()
|
||||
for key in update:
|
||||
if key in self.items[id] and isinstance(self.items[id][key], dict):
|
||||
self.items[id][key].update(update[key])
|
||||
else:
|
||||
self.items[id][key] = update[key]
|
||||
return Response()
|
||||
|
||||
async def _driveAbout(self, request: Request):
|
||||
return json_response({
|
||||
'storageQuota': {
|
||||
'usage': self.usage,
|
||||
'limit': self.space_available
|
||||
},
|
||||
'user': {
|
||||
'emailAddress': "testing@no.where"
|
||||
}
|
||||
})
|
||||
|
||||
async def _delete(self, request: Request):
|
||||
id = request.match_info.get('id')
|
||||
await self._checkDriveHeaders(request)
|
||||
if id not in self.items:
|
||||
raise HTTPNotFound()
|
||||
del self.items[id]
|
||||
return Response()
|
||||
|
||||
async def _query(self, request: Request):
|
||||
await self._checkDriveHeaders(request)
|
||||
query: str = request.query.get("q", "")
|
||||
fields = self.parseFields(request.query.get('fields', 'id'))
|
||||
if mimeTypeQueryPattern.match(query):
|
||||
ret = []
|
||||
mimeType = query[len("mimeType='"):-1]
|
||||
for item in self.items.values():
|
||||
if item.get('mimeType', '') == mimeType:
|
||||
ret.append(self.filter_fields(item, fields))
|
||||
return json_response({'files': ret})
|
||||
elif parentsQueryPattern.match(query):
|
||||
ret = []
|
||||
parent = query[1:-len("' in parents")]
|
||||
if parent not in self.items:
|
||||
raise HTTPNotFound()
|
||||
if parent in self.lostPermission:
|
||||
return Response(
|
||||
status=403,
|
||||
content_type="application/json",
|
||||
text='{"error": {"errors": [{"reason": "forbidden"}]}}')
|
||||
for item in self.items.values():
|
||||
if parent in item.get('parents', []):
|
||||
ret.append(self.filter_fields(item, fields))
|
||||
return json_response({'files': ret})
|
||||
elif len(query) == 0:
|
||||
ret = []
|
||||
for item in self.items.values():
|
||||
ret.append(self.filter_fields(item, fields))
|
||||
return json_response({'files': ret})
|
||||
else:
|
||||
raise HTTPBadRequest()
|
||||
|
||||
async def _create(self, request: Request):
|
||||
await self._checkDriveHeaders(request)
|
||||
item = self.formatItem(await request.json(), self.generateId(30))
|
||||
self.items[item['id']] = item
|
||||
return json_response({'id': item['id']})
|
||||
|
||||
async def _upload(self, request: Request):
|
||||
logger.info("Drive start upload request")
|
||||
await self._checkDriveHeaders(request)
|
||||
if request.query.get('uploadType') != 'resumable':
|
||||
raise HTTPBadRequest()
|
||||
mimeType = request.headers.get('X-Upload-Content-Type', None)
|
||||
if mimeType is None:
|
||||
raise HTTPBadRequest()
|
||||
size = int(request.headers.get('X-Upload-Content-Length', -1))
|
||||
if size < 0:
|
||||
raise HTTPBadRequest()
|
||||
total_size = 0
|
||||
for item in self.items.values():
|
||||
total_size += item.get('size', 0)
|
||||
total_size += size
|
||||
if total_size > self.space_available:
|
||||
return json_response({
|
||||
"error": {
|
||||
"errors": [
|
||||
{"reason": "storageQuotaExceeded"}
|
||||
]
|
||||
}
|
||||
}, status=400)
|
||||
metadata = await request.json()
|
||||
id = self.generateId()
|
||||
|
||||
# Validate parents
|
||||
if 'parents' in metadata:
|
||||
for parent in metadata['parents']:
|
||||
if parent not in self.items:
|
||||
raise HTTPNotFound()
|
||||
if parent in self.lostPermission:
|
||||
return Response(status=403, content_type="application/json", text='{"error": {"errors": [{"reason": "forbidden"}]}}')
|
||||
self._upload_info['size'] = size
|
||||
self._upload_info['mime'] = mimeType
|
||||
self._upload_info['item'] = self.formatItem(metadata, id)
|
||||
self._upload_info['id'] = id
|
||||
self._upload_info['next_start'] = 0
|
||||
metadata['bytes'] = bytearray()
|
||||
metadata['size'] = size
|
||||
resp = Response()
|
||||
resp.headers['Location'] = "http://localhost:" + \
|
||||
str(self._port) + "/upload/drive/v3/files/progress/" + id
|
||||
return resp
|
||||
|
||||
async def _uploadProgress(self, request: Request):
|
||||
if self._waitOnChunk > 0:
|
||||
if self._current_chunk == self._waitOnChunk:
|
||||
self._upload_chunk_trigger.set()
|
||||
await self._upload_chunk_wait.wait()
|
||||
else:
|
||||
self._current_chunk += 1
|
||||
id = request.match_info.get('id')
|
||||
await self._checkDriveHeaders(request)
|
||||
if self._upload_info.get('id', "") != id:
|
||||
raise HTTPBadRequest()
|
||||
chunk_size = int(request.headers['Content-Length'])
|
||||
info = request.headers['Content-Range']
|
||||
if resumeBytesPattern.match(info):
|
||||
resp = Response(status=308)
|
||||
if self._upload_info['next_start'] != 0:
|
||||
resp.headers['Range'] = "bytes=0-{0}".format(self._upload_info['next_start'] - 1)
|
||||
return resp
|
||||
if not bytesPattern.match(info):
|
||||
raise HTTPBadRequest()
|
||||
numbers = intPattern.findall(info)
|
||||
start = int(numbers[0])
|
||||
end = int(numbers[1])
|
||||
total = int(numbers[2])
|
||||
if total != self._upload_info['size']:
|
||||
raise HTTPBadRequest()
|
||||
if start != self._upload_info['next_start']:
|
||||
raise HTTPBadRequest()
|
||||
if not (end == total - 1 or chunk_size % (256 * 1024) == 0):
|
||||
raise HTTPBadRequest()
|
||||
if end > total - 1:
|
||||
raise HTTPBadRequest()
|
||||
|
||||
# get the chunk
|
||||
received_bytes = await self.readAll(request)
|
||||
|
||||
# validate the chunk
|
||||
if len(received_bytes) != chunk_size:
|
||||
raise HTTPBadRequest()
|
||||
|
||||
if len(received_bytes) != end - start + 1:
|
||||
raise HTTPBadRequest()
|
||||
|
||||
self._upload_info['item']['bytes'].extend(received_bytes)
|
||||
|
||||
if len(self._upload_info['item']['bytes']) != end + 1:
|
||||
raise HTTPBadRequest()
|
||||
self.usage += len(received_bytes)
|
||||
self.chunks.append(len(received_bytes))
|
||||
if end == total - 1:
|
||||
# upload is complete, so create the item
|
||||
completed = self.formatItem(self._upload_info['item'], self._upload_info['id'])
|
||||
self.items[completed['id']] = completed
|
||||
return json_response({"id": completed['id']})
|
||||
else:
|
||||
# Return an incomplete response
|
||||
# For some reason, the tests like to stop right here
|
||||
resp = Response(status=308)
|
||||
self._upload_info['next_start'] = end + 1
|
||||
resp.headers['Range'] = "bytes=0-{0}".format(end)
|
||||
return resp
|
||||
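`_upload` and `_uploadProgress` together mimic Google Drive's resumable upload protocol: the initial POST with `uploadType=resumable` returns a `Location` URL, and each subsequent PUT carries `Content-Range: bytes start-end/total`, with every chunk except the last sized in multiples of 256 KiB. A rough client sketch against this simulated endpoint (host, port, and metadata values are the dev defaults and purely illustrative):

```python
import aiohttp

CHUNK = 256 * 1024 * 4  # multiples of 256 KiB, as the simulator enforces


async def resumable_upload(session: aiohttp.ClientSession, token: str, data: bytes):
    headers = {
        "Authorization": "Bearer " + token,
        "X-Upload-Content-Type": "application/tar",
        "X-Upload-Content-Length": str(len(data)),
    }
    # Start the resumable session; the simulator answers with a Location header.
    async with session.post(
        "http://localhost:56153/upload/drive/v3/files/?uploadType=resumable",
        headers=headers, json={"name": "backup.tar"},
    ) as resp:
        location = resp.headers["Location"]

    start = 0
    while start < len(data):
        end = min(start + CHUNK, len(data)) - 1
        chunk = data[start:end + 1]
        async with session.put(location, data=chunk, headers={
            "Authorization": "Bearer " + token,
            "Content-Range": f"bytes {start}-{end}/{len(data)}",
        }) as resp:
            if resp.status == 308:  # incomplete; server confirms the range so far
                start = end + 1
            else:
                return await resp.json()  # final chunk returns the new file id
```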
459
hassio-google-drive-backup/dev/simulated_supervisor.py
Normal file
@@ -0,0 +1,459 @@
|
||||
import asyncio
|
||||
from asyncio.tasks import sleep
|
||||
from datetime import timedelta
|
||||
import random
|
||||
import string
|
||||
import io
|
||||
|
||||
from backup.config import Config, Version
|
||||
from backup.time import Time
|
||||
from aiohttp.web import (HTTPBadRequest, HTTPNotFound,
|
||||
HTTPUnauthorized, Request, Response, get,
|
||||
json_response, post, delete, FileResponse)
|
||||
from injector import inject, singleton
|
||||
from .base_server import BaseServer
|
||||
from .ports import Ports
|
||||
from typing import Any, Dict
|
||||
from tests.helpers import all_addons, createBackupTar, parseBackupInfo
|
||||
|
||||
URL_MATCH_BACKUP_FULL = "^/backups/new/full$"
|
||||
URL_MATCH_BACKUP_DELETE = "^/backups/.*$"
|
||||
URL_MATCH_BACKUP_DOWNLOAD = "^/backups/.*/download$"
|
||||
URL_MATCH_MISC_INFO = "^/info$"
|
||||
URL_MATCH_CORE_API = "^/core/api.*$"
|
||||
URL_MATCH_START_ADDON = "^/addons/.*/start$"
|
||||
URL_MATCH_STOP_ADDON = "^/addons/.*/stop$"
|
||||
URL_MATCH_ADDON_INFO = "^/addons/.*/info$"
|
||||
URL_MATCH_SELF_OPTIONS = "^/addons/self/options$"
|
||||
|
||||
URL_MATCH_SNAPSHOT = "^/snapshots.*$"
|
||||
URL_MATCH_BACKUPS = "^/backups.*$"
|
||||
URL_MATCH_MOUNT = "^/mounts*$"
|
||||
|
||||
|
||||
@singleton
|
||||
class SimulatedSupervisor(BaseServer):
|
||||
@inject
|
||||
def __init__(self, config: Config, ports: Ports, time: Time):
|
||||
self._config = config
|
||||
self._time = time
|
||||
self._ports = ports
|
||||
self._auth_token = "test_header"
|
||||
self._backups: Dict[str, Any] = {}
|
||||
self._backup_data: Dict[str, bytearray] = {}
|
||||
self._backup_lock = asyncio.Lock()
|
||||
self._backup_inner_lock = asyncio.Lock()
|
||||
self._entities = {}
|
||||
self._events = []
|
||||
self._attributes = {}
|
||||
self._notification = None
|
||||
self._min_backup_size = 1024 * 1024 * 5
|
||||
self._max_backup_size = 1024 * 1024 * 5
|
||||
self._addon_slug = "self_slug"
|
||||
self._options = self.defaultOptions()
|
||||
self._username = "user"
|
||||
self._password = "pass"
|
||||
self._addons = all_addons.copy()
|
||||
self._super_version = Version(2023, 7)
|
||||
self._mounts = {
|
||||
'default_backup_mount': None,
|
||||
'mounts': [
|
||||
{
|
||||
"name": "my_media_share",
|
||||
"usage": "media",
|
||||
"type": "cifs",
|
||||
"server": "server.local",
|
||||
"share": "media",
|
||||
"state": "active"
|
||||
},
|
||||
{
|
||||
"name": "my_backup_share",
|
||||
"usage": "backup",
|
||||
"type": "nfs",
|
||||
"server": "server.local",
|
||||
"share": "media",
|
||||
"state": "active"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
self.installAddon(self._addon_slug, "Home Assistant Google drive Backup")
|
||||
self.installAddon("42", "The answer")
|
||||
self.installAddon("sgadg", "sdgsagsdgsggsd")
|
||||
|
||||
def defaultOptions(self):
|
||||
return {
|
||||
"max_backups_in_ha": 4,
|
||||
"max_backups_in_google_drive": 4,
|
||||
"days_between_backups": 3
|
||||
}
|
||||
|
||||
def routes(self):
|
||||
return [
|
||||
post('/addons/{slug}/options', self._updateOptions),
|
||||
post("/core/api/services/persistent_notification/dismiss", self._dismissNotification),
|
||||
post("/core/api/services/persistent_notification/create", self._createNotification),
|
||||
post("/core/api/events/{name}", self._haEventUpdate),
|
||||
post("/core/api/states/{entity}", self._haStateUpdate),
|
||||
post('/auth', self._authenticate),
|
||||
get('/auth', self._authenticate),
|
||||
get('/info', self._miscInfo),
|
||||
get('/addons/self/info', self._selfInfo),
|
||||
get('/addons', self._allAddons),
|
||||
get('/addons/{slug}/info', self._addonInfo),
|
||||
|
||||
post('/addons/{slug}/start', self._startAddon),
|
||||
post('/addons/{slug}/stop', self._stopAddon),
|
||||
get('/addons/{slug}/logo', self._logoAddon),
|
||||
get('/addons/{slug}/icon', self._logoAddon),
|
||||
|
||||
get('/core/info', self._coreInfo),
|
||||
get('/supervisor/info', self._supervisorInfo),
|
||||
get('/supervisor/logs', self._supervisorLogs),
|
||||
get('/core/logs', self._coreLogs),
|
||||
get('/debug/insert/backup', self._debug_insert_backup),
|
||||
get('/debug/info', self._debugInfo),
|
||||
post("/debug/mounts", self._setMounts),
|
||||
|
||||
get('/backups', self._getBackups),
|
||||
get('/mounts', self._getMounts),
|
||||
delete('/backups/{slug}', self._deletebackup),
|
||||
post('/backups/new/upload', self._uploadbackup),
|
||||
post('/backups/new/partial', self._newbackup),
|
||||
post('/backups/new/full', self._newbackup),
|
||||
get('/backups/new/full', self._newbackup),
|
||||
get('/backups/{slug}/download', self._backupDownload),
|
||||
get('/backups/{slug}/info', self._backupDetail),
|
||||
get('/debug/backups/lock', self._lock_backups),
|
||||
|
||||
# TODO: remove once the api path is fully deprecated
|
||||
get('/snapshots', self._getSnapshots),
|
||||
post('/snapshots/{slug}/remove', self._deletebackup),
|
||||
post('/snapshots/new/upload', self._uploadbackup),
|
||||
post('/snapshots/new/partial', self._newbackup),
|
||||
post('/snapshots/new/full', self._newbackup),
|
||||
get('/snapshots/new/full', self._newbackup),
|
||||
get('/snapshots/{slug}/download', self._backupDownload),
|
||||
get('/snapshots/{slug}/info', self._backupDetail),
|
||||
]
|
||||
|
||||
def getEvents(self):
|
||||
return self._events.copy()
|
||||
|
||||
def getEntity(self, entity):
|
||||
return self._entities.get(entity)
|
||||
|
||||
def clearEntities(self):
|
||||
self._entities = {}
|
||||
|
||||
def addon(self, slug):
|
||||
for addon in self._addons:
|
||||
if addon["slug"] == slug:
|
||||
return addon
|
||||
return None
|
||||
|
||||
def getAttributes(self, attribute):
|
||||
return self._attributes.get(attribute)
|
||||
|
||||
def getNotification(self):
|
||||
return self._notification
|
||||
|
||||
def _formatErrorResponse(self, error: str) -> str:
|
||||
return json_response({'result': error})
|
||||
|
||||
def _formatDataResponse(self, data: Any) -> Response:
|
||||
return json_response({'result': 'ok', 'data': data})
|
||||
|
||||
async def toggleBlockBackup(self):
|
||||
if self._backup_lock.locked():
|
||||
self._backup_lock.release()
|
||||
else:
|
||||
await self._backup_lock.acquire()
|
||||
|
||||
async def _verifyHeader(self, request) -> None:
|
||||
if request.headers.get("Authorization", None) == "Bearer " + self._auth_token:
|
||||
return
|
||||
if request.headers.get("X-Supervisor-Token", None) == self._auth_token:
|
||||
return
|
||||
raise HTTPUnauthorized()
|
||||
|
||||
async def _getSnapshots(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return self._formatDataResponse({'snapshots': list(self._backups.values())})
|
||||
|
||||
async def _getBackups(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return self._formatDataResponse({'backups': list(self._backups.values())})
|
||||
|
||||
async def _getMounts(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return self._formatDataResponse(self._mounts)
|
||||
|
||||
async def _setMounts(self, request: Request):
|
||||
self._mounts = await request.json()
|
||||
return self._formatDataResponse({})
|
||||
|
||||
async def _stopAddon(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
slug = request.match_info.get('slug')
|
||||
for addon in self._addons:
|
||||
if addon.get("slug", "") == slug:
|
||||
if addon.get("state") == "started":
|
||||
addon["state"] = "stopped"
|
||||
return self._formatDataResponse({})
|
||||
raise HTTPBadRequest()
|
||||
|
||||
async def _logoAddon(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return FileResponse('hassio-google-drive-backup/backup/static/images/logo.png')
|
||||
|
||||
async def _startAddon(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
slug = request.match_info.get('slug')
|
||||
for addon in self._addons:
|
||||
if addon.get("slug", "") == slug:
|
||||
if addon.get("state") != "started":
|
||||
addon["state"] = "started"
|
||||
return self._formatDataResponse({})
|
||||
raise HTTPBadRequest()
|
||||
|
||||
async def _addonInfo(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
slug = request.match_info.get('slug')
|
||||
for addon in self._addons:
|
||||
if addon.get("slug", "") == slug:
|
||||
return self._formatDataResponse({
|
||||
'boot': addon.get("boot"),
|
||||
'watchdog': addon.get("watchdog"),
|
||||
'state': addon.get("state"),
|
||||
})
|
||||
raise HTTPBadRequest()
|
||||
|
||||
async def _supervisorInfo(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return self._formatDataResponse(
|
||||
{
|
||||
'version': str(self._super_version)
|
||||
}
|
||||
)
|
||||
|
||||
async def _allAddons(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return self._formatDataResponse(
|
||||
{
|
||||
"addons": list(self._addons).copy()
|
||||
}
|
||||
)
|
||||
|
||||
async def _supervisorLogs(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return Response(body=self.generate_random_text(20, 10, 20))
|
||||
|
||||
def generate_random_text(self, line_count, min_words=5, max_words=10):
|
||||
lines = []
|
||||
log_levels = ["WARN", "WARNING", "INFO", "ERROR", "DEBUG"]
|
||||
for _ in range(line_count):
|
||||
level = random.choice(log_levels)
|
||||
word_count = random.randint(min_words, max_words)
|
||||
words = [random.choice(string.ascii_lowercase) for _ in range(word_count)]
|
||||
line = level + " " + ' '.join(''.join(random.choices(string.ascii_lowercase + string.digits, k=random.randint(3, 10))) for _ in words)
|
||||
lines.append(line)
|
||||
return '\n'.join(lines)
|
||||
|
||||
async def _coreLogs(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return Response(body="Core Log line 1\nCore Log Line 2")
|
||||
|
||||
async def _coreInfo(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return self._formatDataResponse(
|
||||
{
|
||||
"version": "1.3.3.7",
|
||||
"last_version": "1.3.3.8",
|
||||
"machine": "VS Dev",
|
||||
"ip_address": "127.0.0.1",
|
||||
"arch": "x86",
|
||||
"image": "image",
|
||||
"custom": "false",
|
||||
"boot": "true",
|
||||
"port": self._ports.server,
|
||||
"ssl": "false",
|
||||
"watchdog": "what is this",
|
||||
"wait_boot": "so many arguments"
|
||||
}
|
||||
)
|
||||
|
||||
async def _internalNewBackup(self, request: Request, input_json, date=None, verify_header=True) -> str:
|
||||
async with self._backup_lock:
|
||||
async with self._backup_inner_lock:
|
||||
if 'wait' in input_json:
|
||||
await sleep(input_json['wait'])
|
||||
if verify_header:
|
||||
await self._verifyHeader(request)
|
||||
slug = self.generateId(8)
|
||||
password = input_json.get('password', None)
|
||||
data = createBackupTar(
|
||||
slug,
|
||||
input_json.get('name', "Default name"),
|
||||
date=date or self._time.now(),
|
||||
padSize=int(random.uniform(self._min_backup_size, self._max_backup_size)),
|
||||
included_folders=input_json.get('folders', None),
|
||||
included_addons=input_json.get('addons', None),
|
||||
password=password)
|
||||
backup_info = parseBackupInfo(data)
|
||||
self._backups[slug] = backup_info
|
||||
self._backup_data[slug] = bytearray(data.getbuffer())
|
||||
return slug
|
||||
|
||||
async def createBackup(self, input_json, date=None):
|
||||
return await self._internalNewBackup(None, input_json, date=date, verify_header=False)
|
||||
|
||||
async def _newbackup(self, request: Request):
|
||||
if self._backup_lock.locked():
|
||||
raise HTTPBadRequest()
|
||||
input_json = await request.json()
|
||||
task = asyncio.shield(asyncio.create_task(self._internalNewBackup(request, input_json)))
|
||||
return self._formatDataResponse({"slug": await task})
|
||||
|
||||
async def _lock_backups(self, request: Request):
|
||||
await self._backup_lock.acquire()
|
||||
return self._formatDataResponse({"message": "locked"})
|
||||
|
||||
async def _uploadbackup(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
try:
|
||||
reader = await request.multipart()
|
||||
contents = await reader.next()
|
||||
received_bytes = bytearray()
|
||||
while True:
|
||||
chunk = await contents.read_chunk()
|
||||
if not chunk:
|
||||
break
|
||||
received_bytes.extend(chunk)
|
||||
info = parseBackupInfo(io.BytesIO(received_bytes))
|
||||
self._backups[info['slug']] = info
|
||||
self._backup_data[info['slug']] = received_bytes
|
||||
return self._formatDataResponse({"slug": info['slug']})
|
||||
except Exception as e:
|
||||
print(str(e))
|
||||
return self._formatErrorResponse("Bad backup")
|
||||
|
||||
async def _deletebackup(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
slug = request.match_info.get('slug')
|
||||
if slug not in self._backups:
|
||||
raise HTTPNotFound()
|
||||
del self._backups[slug]
|
||||
del self._backup_data[slug]
|
||||
return self._formatDataResponse("deleted")
|
||||
|
||||
async def _backupDetail(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
slug = request.match_info.get('slug')
|
||||
if slug not in self._backups:
|
||||
raise HTTPNotFound()
|
||||
return self._formatDataResponse(self._backups[slug])
|
||||
|
||||
async def _backupDownload(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
slug = request.match_info.get('slug')
|
||||
if slug not in self._backup_data:
|
||||
raise HTTPNotFound()
|
||||
return self.serve_bytes(request, self._backup_data[slug])
|
||||
|
||||
async def _selfInfo(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return self._formatDataResponse({
|
||||
"webui": "http://some/address",
|
||||
'ingress_url': "fill me in later",
|
||||
"slug": self._addon_slug,
|
||||
"options": self._options
|
||||
})
|
||||
|
||||
async def _debugInfo(self, request: Request):
|
||||
return self._formatDataResponse({
|
||||
"config": {
|
||||
" webui": "http://some/address",
|
||||
'ingress_url': "fill me in later",
|
||||
"slug": self._addon_slug,
|
||||
"options": self._options
|
||||
}
|
||||
})
|
||||
|
||||
async def _miscInfo(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
return self._formatDataResponse({
|
||||
"supervisor": "super version",
|
||||
"homeassistant": "ha version",
|
||||
"hassos": "hassos version",
|
||||
"hostname": "hostname",
|
||||
"machine": "machine",
|
||||
"arch": "Arch",
|
||||
"supported_arch": "supported arch",
|
||||
"channel": "channel"
|
||||
})
|
||||
|
||||
def installAddon(self, slug, name, version="v1.0", boot=True, started=True):
|
||||
self._addons.append({
|
||||
"name": 'Name for ' + name,
|
||||
"slug": slug,
|
||||
"description": slug + " description",
|
||||
"version": version,
|
||||
"watchdog": False,
|
||||
"boot": "auto" if boot else "manual",
|
||||
"logo": True,
|
||||
"ingress_entry": "/api/hassio_ingress/" + slug,
|
||||
"state": "started" if started else "stopped"
|
||||
})
|
||||
|
||||
async def _authenticate(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
input_json = await request.json()
|
||||
if input_json.get("username") != self._username or input_json.get("password") != self._password:
|
||||
raise HTTPBadRequest()
|
||||
return self._formatDataResponse({})
|
||||
|
||||
async def _updateOptions(self, request: Request):
|
||||
slug = request.match_info.get('slug')
|
||||
|
||||
if slug == "self":
|
||||
await self._verifyHeader(request)
|
||||
self._options = (await request.json())['options'].copy()
|
||||
else:
|
||||
self.addon(slug).update(await request.json())
|
||||
return self._formatDataResponse({})
|
||||
|
||||
async def _haStateUpdate(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
entity = request.match_info.get('entity')
|
||||
json = await request.json()
|
||||
self._entities[entity] = json['state']
|
||||
self._attributes[entity] = json['attributes']
|
||||
return Response()
|
||||
|
||||
async def _haEventUpdate(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
name = request.match_info.get('name')
|
||||
self._events.append((name, await request.json()))
|
||||
return Response()
|
||||
|
||||
async def _createNotification(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
notification = await request.json()
|
||||
print("Created notification with: {}".format(notification))
|
||||
self._notification = notification.copy()
|
||||
return Response()
|
||||
|
||||
async def _dismissNotification(self, request: Request):
|
||||
await self._verifyHeader(request)
|
||||
print("Dismissed notification with: {}".format(await request.json()))
|
||||
self._notification = None
|
||||
return Response()
|
||||
|
||||
async def _debug_insert_backup(self, request: Request) -> Response:
|
||||
days_back = int(request.query.get("days"))
|
||||
date = self._time.now() - timedelta(days=days_back)
|
||||
name = date.strftime("Full Backup %Y-%m-%d %H:%M-%S")
|
||||
wait = int(request.query.get("wait", 0))
|
||||
slug = await self._internalNewBackup(request, {'name': name, 'wait': wait}, date=date, verify_header=False)
|
||||
return self._formatDataResponse({'slug': slug})
|
||||
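Every supervisor route above funnels through `_verifyHeader`, which accepts either an `Authorization: Bearer <token>` header or an `X-Supervisor-Token` header whose value matches the `hassio_header` dev option (`test_header`). A small sketch of listing the simulated backups, assuming the dev simulation server is running on its default port:

```python
import asyncio
import aiohttp


async def list_simulated_backups() -> list:
    # Port 56153 is the dev simulation server; the token matches "hassio_header".
    async with aiohttp.ClientSession() as session:
        async with session.get(
            "http://localhost:56153/backups",
            headers={"Authorization": "Bearer test_header"},
        ) as resp:
            body = await resp.json()
            return body["data"]["backups"]


if __name__ == "__main__":
    print(asyncio.run(list_simulated_backups()))
```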
165
hassio-google-drive-backup/dev/simulationserver.py
Normal file
@@ -0,0 +1,165 @@
|
||||
import re
|
||||
from typing import Dict
|
||||
from yarl import URL
|
||||
import aiohttp
|
||||
from aiohttp.web import (Application,
|
||||
HTTPException,
|
||||
Request, Response, get,
|
||||
json_response, middleware, post, HTTPSeeOther)
|
||||
from aiohttp.client import ClientSession
|
||||
from injector import inject, singleton, Injector, provider
|
||||
|
||||
from backup.time import Time
|
||||
from backup.logger import getLogger
|
||||
from backup.server import Server
|
||||
from tests.faketime import FakeTime
|
||||
from backup.module import BaseModule
|
||||
from backup.config import Config, Setting
|
||||
from .http_exception import HttpMultiException
|
||||
from .simulated_google import SimulatedGoogle
|
||||
from .base_server import BaseServer
|
||||
from .ports import Ports
|
||||
from .request_interceptor import RequestInterceptor
|
||||
from .simulated_supervisor import SimulatedSupervisor
|
||||
from .apiingress import APIIngress
|
||||
import aiorun
|
||||
|
||||
logger = getLogger(__name__)
|
||||
|
||||
mimeTypeQueryPattern = re.compile("^mimeType='.*'$")
|
||||
parentsQueryPattern = re.compile("^'.*' in parents$")
|
||||
bytesPattern = re.compile("^bytes \\d+-\\d+/\\d+$")
|
||||
resumeBytesPattern = re.compile("^bytes \\*/\\d+$")
|
||||
intPattern = re.compile("\\d+")
|
||||
rangePattern = re.compile("bytes=\\d+-\\d+")
|
||||
|
||||
|
||||
@singleton
|
||||
class SimulationServer(BaseServer):
|
||||
@inject
|
||||
def __init__(self, ports: Ports, time: Time, session: ClientSession, authserver: Server, config: Config, google: SimulatedGoogle, supervisor: SimulatedSupervisor, api_ingress: APIIngress, interceptor: RequestInterceptor):
|
||||
self.interceptor = interceptor
|
||||
self.google = google
|
||||
self.supervisor = supervisor
|
||||
self.config = config
|
||||
self.id_counter = 0
|
||||
self.files: Dict[str, bytearray] = {}
|
||||
self._port = ports.server
|
||||
self._time: FakeTime = time
|
||||
self.urls = []
|
||||
self.relative = True
|
||||
self._authserver = authserver
|
||||
self._api_ingress = api_ingress
|
||||
|
||||
def wasUrlRequested(self, pattern):
|
||||
for url in self.urls:
|
||||
if pattern in url:
|
||||
return True
|
||||
return False
|
||||
|
||||
def blockBackups(self):
|
||||
self.block_backups = True
|
||||
|
||||
def unBlockBackups(self):
|
||||
self.block_backups = False
|
||||
|
||||
async def uploadfile(self, request: Request):
|
||||
name: str = str(request.query.get("name", "test"))
|
||||
self.files[name] = await self.readAll(request)
|
||||
return Response(text="")
|
||||
|
||||
async def readFile(self, request: Request):
|
||||
return self.serve_bytes(request, self.files[request.query.get("name", "test")])
|
||||
|
||||
async def slugRedirect(self, request: Request):
|
||||
raise HTTPSeeOther("https://localhost:" + str(self.config.get(Setting.INGRESS_PORT)))
|
||||
|
||||
@middleware
|
||||
async def error_middleware(self, request: Request, handler):
|
||||
self.urls.append(str(request.url))
|
||||
resp = await self.interceptor.checkUrl(request)
|
||||
if resp is not None:
|
||||
return resp
|
||||
try:
|
||||
resp = await handler(request)
|
||||
return resp
|
||||
except Exception as ex:
|
||||
await self.readAll(request)
|
||||
if isinstance(ex, HttpMultiException):
|
||||
return Response(status=ex.status_code)
|
||||
elif isinstance(ex, HTTPException):
|
||||
raise
|
||||
else:
|
||||
logger.printException(ex)
|
||||
return json_response(str(ex), status=500)
|
||||
|
||||
def createApp(self):
|
||||
app = Application(middlewares=[self.error_middleware])
|
||||
app.add_routes(self.routes())
|
||||
self._authserver.buildApp(app)
|
||||
return app
|
||||
|
||||
async def start(self, port):
|
||||
self.runner = aiohttp.web.AppRunner(self.createApp())
|
||||
await self.runner.setup()
|
||||
site = aiohttp.web.TCPSite(self.runner, "0.0.0.0", port=port)
|
||||
await site.start()
|
||||
|
||||
async def stop(self):
|
||||
self.interceptor.stop()
|
||||
await self.runner.shutdown()
|
||||
await self.runner.cleanup()
|
||||
|
||||
def routes(self):
|
||||
return [
|
||||
get('/readfile', self.readFile),
|
||||
post('/uploadfile', self.uploadfile),
|
||||
get('/ingress/self_slug', self.slugRedirect),
|
||||
get('/debug/config', self.debug_config)
|
||||
] + self.google.routes() + self.supervisor.routes() + self._api_ingress.routes()
|
||||
|
||||
async def debug_config(self, request: Request):
|
||||
return json_response(self.supervisor._options)
|
||||
|
||||
|
||||
class SimServerModule(BaseModule):
|
||||
def __init__(self, base_url: URL):
|
||||
super().__init__(override_dns=False)
|
||||
self._base_url = base_url
|
||||
|
||||
@provider
|
||||
@singleton
|
||||
def getConfig(self) -> Config:
|
||||
return Config.withOverrides({
|
||||
Setting.DRIVE_AUTHORIZE_URL: str(self._base_url.with_path("o/oauth2/v2/auth")),
|
||||
Setting.AUTHORIZATION_HOST: str(self._base_url),
|
||||
Setting.TOKEN_SERVER_HOSTS: str(self._base_url),
|
||||
Setting.DRIVE_TOKEN_URL: str(self._base_url.with_path("token")),
|
||||
Setting.DRIVE_DEVICE_CODE_URL: str(self._base_url.with_path("device/code")),
|
||||
Setting.DRIVE_REFRESH_URL: str(self._base_url.with_path("oauth2/v4/token")),
|
||||
Setting.INGRESS_PORT: 56152
|
||||
})
|
||||
|
||||
@provider
|
||||
@singleton
|
||||
def getPorts(self) -> Ports:
|
||||
return Ports(56153, 56151, 56152)
|
||||
|
||||
|
||||
async def main():
|
||||
port = 56153
|
||||
base = URL("http://localhost").with_port(port)
|
||||
injector = Injector(SimServerModule(base))
|
||||
server = injector.get(SimulationServer)
|
||||
|
||||
# start the server
|
||||
runner = aiohttp.web.AppRunner(server.createApp())
|
||||
await runner.setup()
|
||||
site = aiohttp.web.TCPSite(runner, "0.0.0.0", port=port)
|
||||
await site.start()
|
||||
print("Server started on port " + str(port))
|
||||
print("Open a browser at http://localhost:" + str(port))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
aiorun.run(main())
|
||||
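The file above is also runnable on its own: the __main__ block starts the simulation server on port 56153 via aiorun. A minimal sketch of querying the running server's debug endpoint, assuming it is already up at http://localhost:56153 and that the /debug/config route defined above returns the simulated supervisor's options as JSON:

# Sketch only: fetch the simulated supervisor options from a running simulation server.
# Assumes simulationserver.py has been started separately and listens on port 56153.
import asyncio
import aiohttp


async def dump_simulated_options():
    async with aiohttp.ClientSession() as session:
        async with session.get("http://localhost:56153/debug/config") as resp:
            resp.raise_for_status()
            print(await resp.json())


if __name__ == "__main__":
    asyncio.run(dump_simulated_options())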
18
hassio-google-drive-backup/dev/ssl/fullchain.pem
Normal file
@@ -0,0 +1,18 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIC5TCCAc2gAwIBAgIJAN+M1w1AVtigMA0GCSqGSIb3DQEBCwUAMBQxEjAQBgNV
|
||||
BAMMCWxvY2FsaG9zdDAeFw0xOTAzMjYwMzI2MDJaFw0xOTA0MjUwMzI2MDJaMBQx
|
||||
EjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
|
||||
ggEBANAa2QE9uHexG6b/ggk7muXB4AhEcpPU+eqGmp4kFx/cKTYe+rPfui4FbARa
|
||||
QyajXrVRMukEs0wZpUJ11LeGOmuTJ1Cu6mKtk4ub35ZrTfY0W0YdTW0ASYifDNQZ
|
||||
pt4S0HAcY9A6wlorADxqDkqBt3cSuXdDaR6wFhc4x2kN7xMcKgX5Exv6AS04ksLm
|
||||
fu0JNSvY1PcLQOA8bFc8tm4eEQcF51xBJBchCcXwpsr5OXt33govGcgxEPLZIueO
|
||||
nmzzbF0jWBzBhwmjGGnEVsHnxgTG59QshFuB2xf6uWuZolLaPg32b2CV4gomFbn1
|
||||
7j4JMFTlxw80OkWILLR6pMr1gy0CAwEAAaM6MDgwFAYDVR0RBA0wC4IJbG9jYWxo
|
||||
b3N0MAsGA1UdDwQEAwIHgDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkqhkiG9w0B
|
||||
AQsFAAOCAQEAeK7VMbYO1lQmQcNIG/X42sS5Dm/YFSKgXG0VNMwjEa0xOPS54a6P
|
||||
a3n7Lb6cVgwSstCSkQa0/Paqy/OvoJlvvgSrV8ZkqwU7100d7gohrReMAhWbRRDK
|
||||
GkiJDUUQLAT8DXLRry2r5zRDaHX8OzzQuF8dPbFVkjXv9EMpBISY0hmodQFxBmiK
|
||||
hxiYQWDcNQOTLwRk/x/b61AFLSXduonWM3r+29e8ej7LEHh9UJeLFF7S0+8t+7W4
|
||||
F8j8rGWFjYa2KCUFgTOWSg1cUnKYqFaakcMQAlfcXCzuDOso/gwuVFeZZ1hY7gEQ
|
||||
OHJt0Tu+PWE4CQ3118AIajj2pxTuEHc6Ow==
|
||||
-----END CERTIFICATE-----
|
||||
19
hassio-google-drive-backup/dev/ssl/localhost-ca-bundle.csr
Normal file
@@ -0,0 +1,19 @@
|
||||
-----BEGIN CERTIFICATE REQUEST-----
|
||||
MIIDAjCCAeoCAQAwgaIxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDTzETMBEGA1UE
|
||||
BwwKU291dGggUGFyazEYMBYGA1UECgwPVW5pdCBUZXN0cyBJbmMuMR4wHAYDVQQL
|
||||
DBVUZXN0aW5nIERlcHQuIEkgZ3Vlc3MxEjAQBgNVBAMMCWxvY2FsaG9zdDEjMCEG
|
||||
CSqGSIb3DQEJARYUc3RlcGhlbkBiZWVjaGVucy5jb20wggEiMA0GCSqGSIb3DQEB
|
||||
AQUAA4IBDwAwggEKAoIBAQDCu0+68ol5a9ShDmeg41INbwR0QdG0khlzA54Yhu3t
|
||||
yhEYv7H1XE5JKwSENc1YkBTMlnmbEySW+YMpRXy6R/GoCaNU2wnz6UCdkJQQf6l+
|
||||
xIAkaRB+tj7uPpz65olC6tx5CFD+je/A6ZrHzAoEhiKTsQhI5uxexnl191BIQvcj
|
||||
u7qKaN+TXmvKGlixPrYp4T30EWMDsbONyNjcZr/C4Xs1SzicfscDKt8qiINP8Fgd
|
||||
tBDxyPIa4deYVKHG/1le9L1ccPFy1+wSQQG3d4YED7h94ajc5chmjMkJnTTYlRKL
|
||||
XwMZxcsqX9ngHhPvoB5ZahGOLtjyYpxrvduY4kQ8XSaxAgMBAAGgGjAYBgkqhkiG
|
||||
9w0BCQcxCwwJY2hhbGxlbmdlMA0GCSqGSIb3DQEBCwUAA4IBAQCT+ZSEvz9mJhMA
|
||||
v71WWd+QjTyT4+9SItLVK3EAcpPbbJWayCuD+mKCGQr5plixC3w+tjy4coIG8lUo
|
||||
pCX8sXi7TKMVKw6LYvBJeaRRAJ2+exeAQWJvGtRBBohXzm2+SxJ5Zp5+XEY7L3o8
|
||||
Apk++px7kLQTSRZxFAQ/irL/cUrp5Sn33ago+bzGA2AGryrqfBbe/nCwlCGF6cV2
|
||||
2w9oqY38tPeHQK9+MLOWDE0mBZvu+ab1mpTR7hxFVaVIKOBf8BifSVc4qJ8CDS+l
|
||||
N4vEnxHIGdTXVp6yjpWN86qidjbLBqS6ZvY1dw6uFuXWSZP7gRixJi4/NUCf0NSO
|
||||
yd+jFL0b
|
||||
-----END CERTIFICATE REQUEST-----
|
||||
18
hassio-google-drive-backup/dev/ssl/localhost.crt
Normal file
@@ -0,0 +1,18 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIC8DCCAdigAwIBAgIUUOqXw4hsjBcEzJwlO1o9TYw+f+wwDQYJKoZIhvcNAQEL
|
||||
BQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTIwMDIwMzA4MDYyNVoXDTIwMDMw
|
||||
NDA4MDYyNVowFDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEF
|
||||
AAOCAQ8AMIIBCgKCAQEAwrtPuvKJeWvUoQ5noONSDW8EdEHRtJIZcwOeGIbt7coR
|
||||
GL+x9VxOSSsEhDXNWJAUzJZ5mxMklvmDKUV8ukfxqAmjVNsJ8+lAnZCUEH+pfsSA
|
||||
JGkQfrY+7j6c+uaJQurceQhQ/o3vwOmax8wKBIYik7EISObsXsZ5dfdQSEL3I7u6
|
||||
imjfk15ryhpYsT62KeE99BFjA7GzjcjY3Ga/wuF7NUs4nH7HAyrfKoiDT/BYHbQQ
|
||||
8cjyGuHXmFShxv9ZXvS9XHDxctfsEkEBt3eGBA+4feGo3OXIZozJCZ002JUSi18D
|
||||
GcXLKl/Z4B4T76AeWWoRji7Y8mKca73bmOJEPF0msQIDAQABozowODAUBgNVHREE
|
||||
DTALgglsb2NhbGhvc3QwCwYDVR0PBAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUFBwMB
|
||||
MA0GCSqGSIb3DQEBCwUAA4IBAQBsZ29ZHTO6yNGPKWpxfOG38Z+mk6eh6TpbIVze
|
||||
b7L2cFr/ONEFyz9hnS3kf23S9VsoX0AMdqYZbGmUT/4+d9+Q8hRXv7W3zenUk4KY
|
||||
SkMfvB3J27w2l9Zx7oYfonBC7SSbfYrCBHgZwsINzdP5aC2q6eFTOadIdcF2bxf9
|
||||
FU/4aUyOeCkHAtYkVyxM3F33Qmf7ym7OZYKLn4SrPLFRSYiWRd8w+ww75uinnS5W
|
||||
bG96OojPYzIZu8rb3b5ISR2BMWP0JVQRdmV+8TG1ekaA6EB5gAven55OxCmIUAJm
|
||||
UEOLPRtVvJN0SE1S6jZBXBHler7IRDKpxATXbdFBK01s4rDz
|
||||
-----END CERTIFICATE-----
|
||||
28
hassio-google-drive-backup/dev/ssl/localhost.key
Normal file
@@ -0,0 +1,28 @@
|
||||
-----BEGIN PRIVATE KEY-----
|
||||
MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDCu0+68ol5a9Sh
|
||||
Dmeg41INbwR0QdG0khlzA54Yhu3tyhEYv7H1XE5JKwSENc1YkBTMlnmbEySW+YMp
|
||||
RXy6R/GoCaNU2wnz6UCdkJQQf6l+xIAkaRB+tj7uPpz65olC6tx5CFD+je/A6ZrH
|
||||
zAoEhiKTsQhI5uxexnl191BIQvcju7qKaN+TXmvKGlixPrYp4T30EWMDsbONyNjc
|
||||
Zr/C4Xs1SzicfscDKt8qiINP8FgdtBDxyPIa4deYVKHG/1le9L1ccPFy1+wSQQG3
|
||||
d4YED7h94ajc5chmjMkJnTTYlRKLXwMZxcsqX9ngHhPvoB5ZahGOLtjyYpxrvduY
|
||||
4kQ8XSaxAgMBAAECggEAJ1rt0S2FRSnazjX4EZb/lUFzl/9ZX3ILfKglgnV6jo1B
|
||||
CUxsrdba54SvI/0vpA9ydKqQpxumUHDa5jNp8sfpefmArfyatVXVvkJi+jaizcDu
|
||||
2Oz27XTtoP68gSSoZwLKThe1Ls0GwGk1491DxQhK4qhrsTgiW0EneQTjj8cg5XKH
|
||||
/2l0WDslZDwW8XkJ1iqGi/OPs/X4SHggzX3xEFS2SpDK0e6GovyTfijpaql3MLMR
|
||||
jnEeF69hUKKN7ADxhWvQ8d5C0CICYUzryGScVUs5312Zl83iOoeaixxfh6UaNOmE
|
||||
jjdM6Hc7VbYEcfQTdZXyIPrzcz+Tc0DSDW+QsktLMQKBgQDn7j/oCNqLhxa1XnA8
|
||||
HgQqUUTav/OWlWpieTmcyZ2LkRRw9MJTnP1FIfIvOXplWFSpbSSArAEzsjpjRt0n
|
||||
2+7VxwN3qNirNGAk3PZiRXXHq7sE3z39PhLPthpNisYTDTIx8fcYK032uEPHsSSj
|
||||
i13yKeYqeGOmfnu0nrlmZ9+ThQKBgQDW8MnvhqjMxZDdVdxZKlY/8ihnubVBlp59
|
||||
s2SFIrWD1/QcKawCzagJHe/YR865k3ti7XIBghmKwLSMa6ENdTxTSSLHbBXlXJtH
|
||||
tlWFgfVb8eDi7zo9178W8TrWEB7dSC2F6qMN17wOKWRkyo/c4cYBiAUaNQ1inJjk
|
||||
ACOvHesAPQKBgHXEttKd3EtJNzC1WYxNOZQ7XBkvqwLlr/V81NJWVhdOffC1eA95
|
||||
AeoeyJlOOGZJqgO2Ffj4XkvfzmIm05mvxeDrg0k5hXu5xrAxOzK/ToUrIHXi3dk/
|
||||
sdGjCEwjkVyPMNPHp86v/pCvFEvMGWyqEfQrbmJWa1NZmnsmtcHYMOD5AoGAD1AW
|
||||
Qt9IFVaZ7HraeOvAO0wIPuOHG0Ycwn3OUoHXhq4S8RKy83wtVYDxfmoXOzdbmf+q
|
||||
mJrpMO5rrnlYfvn0M0bJmIWFxdJkKaa+zwUkMsm3qNM8Rf2h2oOTGn8Jg+BJhfni
|
||||
ZfERr7yZL2kS+LyI+8DyBBz1eCoJ5mxwHmC2Rk0CgYBcrhxANSpikw07XLRFcvk9
|
||||
m79qiEThhmiBf1WVZdtWNi9hR+zs6mWrTk8N8jfLzNLLNMPdAAybF8feeMTa9xpS
|
||||
zXF9Gqlayzx/+wyPts7ocrdJKikDVdVZauoxG+mNE87VcVEx87ZiboirQVoKSsxe
|
||||
OmwKminJ/E4GHJCY7RLQAw==
|
||||
-----END PRIVATE KEY-----
|
||||
28
hassio-google-drive-backup/dev/ssl/privkey.pem
Normal file
@@ -0,0 +1,28 @@
|
||||
-----BEGIN PRIVATE KEY-----
|
||||
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDQGtkBPbh3sRum
|
||||
/4IJO5rlweAIRHKT1PnqhpqeJBcf3Ck2Hvqz37ouBWwEWkMmo161UTLpBLNMGaVC
|
||||
ddS3hjprkydQrupirZOLm9+Wa032NFtGHU1tAEmInwzUGabeEtBwHGPQOsJaKwA8
|
||||
ag5Kgbd3Erl3Q2kesBYXOMdpDe8THCoF+RMb+gEtOJLC5n7tCTUr2NT3C0DgPGxX
|
||||
PLZuHhEHBedcQSQXIQnF8KbK+Tl7d94KLxnIMRDy2SLnjp5s82xdI1gcwYcJoxhp
|
||||
xFbB58YExufULIRbgdsX+rlrmaJS2j4N9m9gleIKJhW59e4+CTBU5ccPNDpFiCy0
|
||||
eqTK9YMtAgMBAAECggEADlvr4UQK+GdGCy3SIST1uSi5dpiSd1TYsa/79zFyTwZ3
|
||||
6X4VuleTlx1UqLA5te7L2CL0KlPiszuJxZ4vwUIHwehzbAPFtG1ZouZsdQqOZJCU
|
||||
Q7A96Wl9qWmgDvp+IxCVRUcQNAv54RLaf1CqD8YHjLXEClCibjWkMJIAYGVPu7ez
|
||||
44sbXenPi+4OfI5IHhhBm+RmXv6QpP/A4OyIg/X35NoIp+z+J/aajFsb6AMvFejU
|
||||
kMCj23PUv4MGA0zrc09UDzM/d7qwCeOMCW0QqKidbkZ+UtY3lsSj7b0l50TTEYsf
|
||||
2sB/xjkUVHg9sJc8ieuf8LaHedvmiQPfECjZU9VhmQKBgQDx0h359EJSvil/iQ4o
|
||||
OrsmxMz40mi/9pwznF0SUuRyKOsmJsSx7zL3rVFo/YLHOE5Ju4PSDm1OL4drUE0z
|
||||
2l/0S6tlN4teHU6x969Xqm2vpwKP3jFXpD0zEi4QRGXgqtY1sVFO4ZIKfTa3KKMu
|
||||
wqNmAB1KczvIkU71ClzqaVUULwKBgQDcTqI1SkwmIGP4PnGbLQTRI8pmw4xx/d7X
|
||||
bpgAeCegSwfCy94nX7TdDYujhxa1rp3ya5YSnkTTN7oGCXIsZkLjmfFmjiIh3uEk
|
||||
YX0obydQvVUfnPTPXQP3QhZG2dQtFdUUJOsu1bJKC7a/jcLGqbJzeBUg/Sb0/gXP
|
||||
KCPCCr5bYwKBgHrbVX94KXoAQvUYnKizrgG0Wq7Pt4hPsmxGNMLqekXFpDJt3+DG
|
||||
tg4/b+z3X0n3wU6UhhRiYAYo/5P16EM/3yAukZWK8rOOED06qUrQu4lSQGr3Z/ou
|
||||
5yjbQ6vgFCJgqRP+UmDRGXFazEGh08Yd/QYFaNw6T1VG/eZgrXQqr57hAoGBALcb
|
||||
qFiQm0ApNc4T4IrwXQuTKtxE9guczUXTxwTE2XKySg4PMmMZehMs+f39/tMdAmyG
|
||||
HWL2JxBDRhtUaJAcosXXorvxsM7kF88MNGGSGWRTKVgwNY3QqsYtKKTU0jRy6/pl
|
||||
QRBZT2mZ2NfXdKd4TjkI+s7DekiwhZWLsETMdzEvAoGARDyJNOpPPm/VpDgV08uU
|
||||
P1yPOT6j8qhQ2dN1mEab0NeyY6HGriUg8y6HJ81Obt4YyVPlEplDJe8TkphWNsby
|
||||
B93FpH56WF4g8ivKD4oC2JghlWf4c0MgxiWyoNvlHSM7Dmq2UfPDyV+1UhnNH1ty
|
||||
CUMs7Fjk4BeJbrYmJf3VxYU=
|
||||
-----END PRIVATE KEY-----
|
||||
BIN
hassio-google-drive-backup/icon.png
Normal file
Binary file not shown. (added, 7.1 KiB)
BIN
hassio-google-drive-backup/logo.png
Normal file
Binary file not shown. (added, 9.3 KiB)
19
hassio-google-drive-backup/requirements-addon.txt
Normal file
@@ -0,0 +1,19 @@
google-api-python-client
google-auth-httplib2
google-auth-oauthlib
oauth2client
python-dateutil
watchdog
pyyaml
dnspython
aiorun
aiohttp
aiodns
injector
aiofiles
aiofile
colorlog
aiohttp-jinja2
aioping
pytz
tzlocal
20
hassio-google-drive-backup/requirements-server.txt
Normal file
@@ -0,0 +1,20 @@
aiodns
aiofiles
aiofile
aiohttp
aiorun
colorlog
dnspython
google-cloud-logging
google-cloud-firestore
injector
oauth2client
ptvsd
python-dateutil
pyyaml
watchdog
aiohttp-jinja2
firebase-admin
pytz
tzlocal
aioping
8
hassio-google-drive-backup/setup.py
Normal file
@@ -0,0 +1,8 @@
from setuptools import setup, find_packages
setup(
    name="hgdb",
    packages=find_packages(),
    package_data={
        'backup': ['static/*', 'static/*/*', 'static/*/*/*']
    }
)
0
hassio-google-drive-backup/snapshot.json
Normal file
0
hassio-google-drive-backup/tests/__init__.py
Normal file
427
hassio-google-drive-backup/tests/conftest.py
Normal file
@@ -0,0 +1,427 @@
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import tempfile
|
||||
import asyncio
|
||||
import platform
|
||||
import aiohttp
|
||||
from yarl import URL
|
||||
|
||||
import pytest
|
||||
from aiohttp import ClientSession
|
||||
from injector import (ClassAssistedBuilder, Injector, Module, inject, provider,
|
||||
singleton)
|
||||
|
||||
from backup.config import Config, Setting
|
||||
from backup.model import Coordinator
|
||||
from dev.simulationserver import SimulationServer
|
||||
from backup.drive import DriveRequests, DriveSource, FolderFinder, AuthCodeQuery
|
||||
from backup.util import GlobalInfo, Estimator, Resolver, DataCache
|
||||
from backup.ha import HaRequests, HaSource, HaUpdater
|
||||
from backup.logger import reset
|
||||
from backup.model import DummyBackup, DestinationPrecache, Model
|
||||
from backup.time import Time
|
||||
from backup.module import BaseModule
|
||||
from backup.debugworker import DebugWorker
|
||||
from backup.creds import Creds, DriveRequester
|
||||
from backup.server import ErrorStore
|
||||
from backup.ha import AddonStopper
|
||||
from backup.ui import UiServer
|
||||
from backup.watcher import Watcher
|
||||
from .faketime import FakeTime
|
||||
from .helpers import Uploader, createBackupTar
|
||||
from dev.ports import Ports
|
||||
from dev.simulated_google import SimulatedGoogle
|
||||
from dev.request_interceptor import RequestInterceptor
|
||||
from dev.simulated_supervisor import SimulatedSupervisor
|
||||
|
||||
|
||||
@singleton
|
||||
class FsFaker():
|
||||
@inject
|
||||
def __init__(self):
|
||||
self.bytes_free = 1024 * 1024 * 1024
|
||||
self.bytes_total = 1024 * 1024 * 1024
|
||||
self.old_method = None
|
||||
|
||||
def start(self):
|
||||
if platform.system() != "Windows":
|
||||
self.old_method = os.statvfs
|
||||
os.statvfs = self._hijack
|
||||
|
||||
def stop(self):
|
||||
if platform.system() != "Windows":
|
||||
os.statvfs = self.old_method
|
||||
|
||||
def _hijack(self, path):
|
||||
return os.statvfs_result((0, 1, int(self.bytes_total), int(self.bytes_free), int(self.bytes_free), 0, 0, 0, 0, 255))
|
||||
|
||||
def setFreeBytes(self, bytes_free, bytes_total=1):
|
||||
self.bytes_free = bytes_free
|
||||
self.bytes_total = bytes_total
|
||||
if self.bytes_free > self.bytes_total:
|
||||
self.bytes_total = self.bytes_free
|
||||
|
||||
|
||||
class ReaderHelper:
|
||||
def __init__(self, session, ui_port, ingress_port):
|
||||
self.session = session
|
||||
self.ui_port = ui_port
|
||||
self.ingress_port = ingress_port
|
||||
self.timeout = aiohttp.ClientTimeout(total=20)
|
||||
|
||||
def getUrl(self, ingress=True, ssl=False):
|
||||
if ssl:
|
||||
protocol = "https"
|
||||
else:
|
||||
protocol = "http"
|
||||
if ingress:
|
||||
return protocol + "://localhost:" + str(self.ingress_port) + "/"
|
||||
else:
|
||||
return protocol + "://localhost:" + str(self.ui_port) + "/"
|
||||
|
||||
async def getjson(self, path, status=200, json=None, auth=None, ingress=True, ssl=False, sslcontext=None):
|
||||
async with self.session.get(self.getUrl(ingress, ssl) + path, json=json, auth=auth, ssl=sslcontext, timeout=self.timeout) as resp:
|
||||
assert resp.status == status
|
||||
return await resp.json()
|
||||
|
||||
async def get(self, path, status=200, json=None, auth=None, ingress=True, ssl=False):
|
||||
async with self.session.get(self.getUrl(ingress, ssl) + path, json=json, auth=auth, timeout=self.timeout) as resp:
|
||||
if resp.status != status:
|
||||
import logging
|
||||
logging.getLogger().error(await resp.text())
|
||||
assert resp.status == status
|
||||
return await resp.text()
|
||||
|
||||
async def postjson(self, path, status=200, json=None, ingress=True):
|
||||
async with self.session.post(self.getUrl(ingress) + path, json=json, timeout=self.timeout) as resp:
|
||||
assert resp.status == status
|
||||
return await resp.json()
|
||||
|
||||
async def assertError(self, path, error_type="generic_error", status=500, ingress=True, json=None):
|
||||
logging.getLogger().info("Requesting " + path)
|
||||
data = await self.getjson(path, status=status, ingress=ingress, json=json)
|
||||
assert data['error_type'] == error_type
|
||||
|
||||
|
||||
# This module should only ever have bindings that can also be satisfied by MainModule
|
||||
class TestModule(Module):
|
||||
def __init__(self, config: Config, ports: Ports):
|
||||
self.ports = ports
|
||||
self.config = config
|
||||
|
||||
@provider
|
||||
@singleton
|
||||
def getDriveCreds(self, time: Time) -> Creds:
|
||||
return Creds(time, "test_client_id", time.now(), "test_access_token", "test_refresh_token", "test_client_secret")
|
||||
|
||||
@provider
|
||||
@singleton
|
||||
def getTime(self) -> Time:
|
||||
return FakeTime()
|
||||
|
||||
@provider
|
||||
@singleton
|
||||
def getPorts(self) -> Ports:
|
||||
return self.ports
|
||||
|
||||
@provider
|
||||
@singleton
|
||||
def getConfig(self) -> Config:
|
||||
return self.config
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def event_loop():
|
||||
if platform.system() == "Windows":
|
||||
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
|
||||
return asyncio.new_event_loop()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def generate_config(server_url: URL, ports, cleandir):
|
||||
return Config.withOverrides({
|
||||
Setting.DRIVE_URL: str(server_url),
|
||||
Setting.SUPERVISOR_URL: str(server_url) + "/",
|
||||
Setting.AUTHORIZATION_HOST: str(server_url),
|
||||
Setting.TOKEN_SERVER_HOSTS: str(server_url),
|
||||
Setting.DRIVE_REFRESH_URL: str(server_url.with_path("/oauth2/v4/token")),
|
||||
Setting.DRIVE_AUTHORIZE_URL: str(server_url.with_path("/o/oauth2/v2/auth")),
|
||||
Setting.DRIVE_TOKEN_URL: str(server_url.with_path("/token")),
|
||||
Setting.DRIVE_DEVICE_CODE_URL: str(server_url.with_path("/device/code")),
|
||||
Setting.SUPERVISOR_TOKEN: "test_header",
|
||||
Setting.SECRETS_FILE_PATH: "secrets.yaml",
|
||||
Setting.CREDENTIALS_FILE_PATH: "credentials.dat",
|
||||
Setting.FOLDER_FILE_PATH: "folder.dat",
|
||||
Setting.RETAINED_FILE_PATH: "retained.json",
|
||||
Setting.ID_FILE_PATH: "id.json",
|
||||
Setting.DATA_CACHE_FILE_PATH: "data_cache.json",
|
||||
Setting.STOP_ADDON_STATE_PATH: "stop_addon.json",
|
||||
Setting.INGRESS_TOKEN_FILE_PATH: "ingress.dat",
|
||||
Setting.DEFAULT_DRIVE_CLIENT_ID: "test_client_id",
|
||||
Setting.DEFAULT_DRIVE_CLIENT_SECRET: "test_client_secret",
|
||||
Setting.BACKUP_DIRECTORY_PATH: os.path.join(cleandir, "backups"),
|
||||
Setting.PORT: ports.ui,
|
||||
Setting.INGRESS_PORT: ports.ingress,
|
||||
Setting.BACKUP_STARTUP_DELAY_MINUTES: 0,
|
||||
Setting.PING_TIMEOUT: 0.1,
|
||||
})
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def injector(cleandir, ports, generate_config):
|
||||
drive_creds = Creds(FakeTime(), "test_client_id", None, "test_access_token", "test_refresh_token")
|
||||
|
||||
os.mkdir(os.path.join(cleandir, "backups"))
|
||||
with open(os.path.join(cleandir, "secrets.yaml"), "w") as f:
|
||||
f.write("for_unit_tests: \"password value\"\n")
|
||||
|
||||
with open(os.path.join(cleandir, "credentials.dat"), "w") as f:
|
||||
f.write(json.dumps(drive_creds.serialize()))
|
||||
|
||||
return Injector([BaseModule(), TestModule(generate_config, ports)])
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def ui_server(injector, server):
|
||||
os.mkdir("static")
|
||||
server = injector.get(UiServer)
|
||||
await server.run()
|
||||
yield server
|
||||
await server.shutdown()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def reader(server, ui_server, session, ui_port, ingress_port):
|
||||
return ReaderHelper(session, ui_port, ingress_port)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def uploader(injector: Injector, server_url):
|
||||
return injector.get(ClassAssistedBuilder[Uploader]).build(host=str(server_url))
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def google(injector: Injector):
|
||||
return injector.get(SimulatedGoogle)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def interceptor(injector: Injector):
|
||||
return injector.get(RequestInterceptor)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def supervisor(injector: Injector, server, session):
|
||||
return injector.get(SimulatedSupervisor)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def addon_stopper(injector: Injector):
|
||||
return injector.get(AddonStopper)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def server(injector, port, drive_creds: Creds, session):
|
||||
server = injector.get(SimulationServer)
|
||||
|
||||
# start the server
|
||||
logging.getLogger().info("Starting SimulationServer on port " + str(port))
|
||||
await server.start(port)
|
||||
yield server
|
||||
await server.stop()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def data_cache(injector):
|
||||
return injector.get(DataCache)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def session(injector):
|
||||
async with injector.get(ClientSession) as session:
|
||||
yield session
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def precache(injector):
|
||||
return injector.get(DestinationPrecache)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def backup(coord, source, dest):
|
||||
await coord.sync()
|
||||
assert len(coord.backups()) == 1
|
||||
return coord.backups()[0]
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def fs(injector):
|
||||
faker = injector.get(FsFaker)
|
||||
faker.start()
|
||||
yield faker
|
||||
faker.stop()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def estimator(injector, fs):
|
||||
return injector.get(Estimator)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def device_code(injector):
|
||||
return injector.get(AuthCodeQuery)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def error_store(injector):
|
||||
return injector.get(ErrorStore)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def model(injector):
|
||||
return injector.get(Model)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def global_info(injector):
|
||||
return injector.get(GlobalInfo)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def server_url(port):
|
||||
return URL("http://localhost:").with_port(port)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def ports(unused_tcp_port_factory):
|
||||
return Ports(unused_tcp_port_factory(), unused_tcp_port_factory(), unused_tcp_port_factory())
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def port(ports: Ports):
|
||||
return ports.server
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def ui_url(ports: Ports):
|
||||
return URL("http://localhost").with_port(ports.ingress)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def ui_port(ports: Ports):
|
||||
return ports.ui
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def ingress_port(ports: Ports):
|
||||
return ports.ingress
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def coord(injector):
|
||||
return injector.get(Coordinator)
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
async def updater(injector):
|
||||
return injector.get(HaUpdater)
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
async def cleandir():
|
||||
newpath = tempfile.mkdtemp()
|
||||
os.chdir(newpath)
|
||||
return newpath
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def time(injector):
|
||||
reset()
|
||||
return injector.get(Time)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def config(injector):
|
||||
return injector.get(Config)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def drive_creds(injector):
|
||||
return injector.get(Creds)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def drive(injector, server, session):
|
||||
return injector.get(DriveSource)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def ha(injector, server, session):
|
||||
return injector.get(HaSource)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def ha_requests(injector, server):
|
||||
return injector.get(HaRequests)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def drive_requests(injector, server):
|
||||
return injector.get(DriveRequests)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def drive_requester(injector, server):
|
||||
return injector.get(DriveRequester)
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def verify_closed_responses(drive_requester: DriveRequester):
|
||||
yield "unused"
|
||||
for resp in drive_requester.all_resposnes:
|
||||
assert resp.closed
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def resolver(injector):
|
||||
return injector.get(Resolver)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def client_identifier(injector):
|
||||
return injector.get(Config).clientIdentifier()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
async def debug_worker(injector):
|
||||
return injector.get(DebugWorker)
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
async def folder_finder(injector):
|
||||
return injector.get(FolderFinder)
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
async def watcher(injector):
|
||||
watcher = injector.get(Watcher)
|
||||
yield watcher
|
||||
await watcher.stop()
|
||||
|
||||
|
||||
class BackupHelper():
|
||||
def __init__(self, uploader, time):
|
||||
self.time = time
|
||||
self.uploader = uploader
|
||||
|
||||
async def createFile(self, size=1024 * 1024 * 2, slug="testslug", name="Test Name"):
|
||||
from_backup: DummyBackup = DummyBackup(
|
||||
name, self.time.toUtc(self.time.local(1985, 12, 6)), "fake source", slug)
|
||||
data = await self.uploader.upload(createBackupTar(slug, name, self.time.now(), size))
|
||||
return from_backup, data
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def backup_helper(uploader, time):
|
||||
return BackupHelper(uploader, time)
|
||||
0
hassio-google-drive-backup/tests/drive/__init__.py
Normal file
71
hassio-google-drive-backup/tests/drive/test_driverequests.py
Normal file
@@ -0,0 +1,71 @@
|
||||
import os
|
||||
import json
|
||||
from time import sleep
|
||||
|
||||
import pytest
|
||||
import asyncio
|
||||
from yarl import URL
|
||||
from aiohttp.client_exceptions import ClientResponseError
|
||||
from backup.config import Config, Setting
|
||||
from dev.simulationserver import SimulationServer
|
||||
from dev.simulated_google import SimulatedGoogle, URL_MATCH_UPLOAD_PROGRESS, URL_MATCH_FILE
|
||||
from dev.request_interceptor import RequestInterceptor
|
||||
from backup.drive import DriveSource, FolderFinder, DriveRequests, RETRY_SESSION_ATTEMPTS, UPLOAD_SESSION_EXPIRATION_DURATION, URL_START_UPLOAD
|
||||
from backup.drive.driverequests import (BASE_CHUNK_SIZE, CHUNK_UPLOAD_TARGET_SECONDS)
|
||||
from backup.drive.drivesource import FOLDER_MIME_TYPE
|
||||
from backup.exceptions import (BackupFolderInaccessible, BackupFolderMissingError,
|
||||
DriveQuotaExceeded, ExistingBackupFolderError,
|
||||
GoogleCantConnect, GoogleCredentialsExpired,
|
||||
GoogleInternalError, GoogleUnexpectedError,
|
||||
GoogleSessionError, GoogleTimeoutError, CredRefreshMyError, CredRefreshGoogleError)
|
||||
from backup.creds import Creds
|
||||
from backup.model import DriveBackup, DummyBackup
|
||||
from ..faketime import FakeTime
|
||||
from ..helpers import compareStreams, createBackupTar
|
||||
|
||||
|
||||
class BackupHelper():
|
||||
def __init__(self, uploader, time):
|
||||
self.time = time
|
||||
self.uploader = uploader
|
||||
|
||||
async def createFile(self, size=1024 * 1024 * 2, slug="testslug", name="Test Name", note=None):
|
||||
from_backup: DummyBackup = DummyBackup(
|
||||
name, self.time.toUtc(self.time.local(1985, 12, 6)), "fake source", slug, note=note, size=size)
|
||||
data = await self.uploader.upload(createBackupTar(slug, name, self.time.now(), size))
|
||||
return from_backup, data
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_minimum_chunk_size(drive_requests: DriveRequests, time: FakeTime, backup_helper: BackupHelper, config: Config):
|
||||
config.override(Setting.UPLOAD_LIMIT_BYTES_PER_SECOND, BASE_CHUNK_SIZE)
|
||||
from_backup, data = await backup_helper.createFile(BASE_CHUNK_SIZE * 10)
|
||||
async with data:
|
||||
async for progress in drive_requests.create(data, {}, "unused"):
|
||||
assert time.sleeps[-1] == 1
|
||||
assert len(time.sleeps) == 11
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_lower_chunk_size(drive_requests: DriveRequests, time: FakeTime, backup_helper: BackupHelper, config: Config):
|
||||
config.override(Setting.UPLOAD_LIMIT_BYTES_PER_SECOND, BASE_CHUNK_SIZE / 2)
|
||||
from_backup, data = await backup_helper.createFile(BASE_CHUNK_SIZE * 10)
|
||||
|
||||
# It should still upload in 256 kb chunks, just with more delay
|
||||
async with data:
|
||||
async for progress in drive_requests.create(data, {}, "unused"):
|
||||
assert time.sleeps[-1] == 2
|
||||
assert len(time.sleeps) == 11
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_higher_speed_limit(drive_requests: DriveRequests, time: FakeTime, backup_helper: BackupHelper, config: Config):
|
||||
config.override(Setting.UPLOAD_LIMIT_BYTES_PER_SECOND, BASE_CHUNK_SIZE * 2)
|
||||
from_backup, data = await backup_helper.createFile(BASE_CHUNK_SIZE * 10)
|
||||
|
||||
# It should still upload in 256 kb chunks, just with less delay between them
|
||||
async with data:
|
||||
async for progress in drive_requests.create(data, {}, "unused"):
|
||||
assert time.sleeps[-1] == 0.5
|
||||
assert len(time.sleeps) == 11
|
||||
|
||||
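The three tests above pin down the throttling arithmetic: uploads keep the 256 KB base chunk size and the delay between chunks scales as chunk size divided by UPLOAD_LIMIT_BYTES_PER_SECOND, which is why the expected sleeps are 1, 2 and 0.5 seconds respectively. A minimal sketch of that relationship (the helper below is illustrative and not part of the addon):

# Illustrative only: the per-chunk delay implied by the assertions above.
BASE_CHUNK_SIZE = 256 * 1024  # 256 KB, per the tests' comments


def expected_sleep_seconds(limit_bytes_per_second: float) -> float:
    """Expected delay between chunks when uploads are rate limited."""
    return BASE_CHUNK_SIZE / limit_bytes_per_second


assert expected_sleep_seconds(BASE_CHUNK_SIZE) == 1
assert expected_sleep_seconds(BASE_CHUNK_SIZE / 2) == 2
assert expected_sleep_seconds(BASE_CHUNK_SIZE * 2) == 0.5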
54
hassio-google-drive-backup/tests/faketime.py
Normal file
@@ -0,0 +1,54 @@
import asyncio
from datetime import datetime, timedelta
from backup.time import Time
from pytz import timezone


class FakeTime(Time):
    def __init__(self, now: datetime = None):
        super().__init__(local_tz=timezone('EST'))
        if now:
            self._now = now
        else:
            self._now = self.toUtc(
                datetime(1985, 12, 6, 0, 0, 0, tzinfo=timezone('EST')))
        self._start = self._now
        self.sleeps = []

    def setTimeZone(self, tz):
        if isinstance(tz, str):
            self.local_tz = timezone(tz)
        else:
            self.local_tz = tz

    def monotonic(self):
        return (self._now - self._start).total_seconds()

    def setNow(self, now: datetime):
        self._now = now
        return self

    def advanceDay(self, days=1):
        return self.advance(days=days)

    def advance(self, days=0, hours=0, minutes=0, seconds=0, duration=None):
        self._now = self._now + \
            timedelta(days=days, hours=hours, seconds=seconds, minutes=minutes)
        if duration is not None:
            self._now = self._now + duration
        return self

    def now(self) -> datetime:
        return self._now

    def nowLocal(self) -> datetime:
        return self.toLocal(self._now)

    async def sleepAsync(self, seconds: float, _exit_early: asyncio.Event = None):
        self.sleeps.append(seconds)
        self._now = self._now + timedelta(seconds=seconds)
        # allow the task to be interrupted if such a thing is requested.
        await asyncio.sleep(0)

    def clearSleeps(self):
        self.sleeps = []
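FakeTime lets tests drive the clock deterministically: advance() moves now() forward, and sleepAsync() records the requested delay instead of actually waiting. A minimal usage sketch, assuming the import path used elsewhere in this commit (tests.faketime):

# Sketch only: typical FakeTime usage in a test.
import asyncio
from datetime import timedelta
from tests.faketime import FakeTime


async def demo():
    time = FakeTime()
    start = time.now()
    time.advance(minutes=5)
    assert time.now() - start == timedelta(minutes=5)

    await time.sleepAsync(30)   # recorded, not actually slept
    assert time.sleeps == [30]
    assert time.monotonic() == 330  # 5 minutes plus 30 seconds


asyncio.run(demo())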
219
hassio-google-drive-backup/tests/helpers.py
Normal file
@@ -0,0 +1,219 @@
|
||||
import json
|
||||
import tarfile
|
||||
import pytest
|
||||
import platform
|
||||
import os
|
||||
from datetime import datetime
|
||||
from io import BytesIO, IOBase
|
||||
|
||||
from aiohttp import ClientSession
|
||||
from injector import inject, singleton
|
||||
|
||||
from backup.util import AsyncHttpGetter
|
||||
from backup.model import SimulatedSource
|
||||
from backup.time import Time
|
||||
from backup.config import CreateOptions
|
||||
|
||||
all_folders = [
|
||||
"share",
|
||||
"ssl",
|
||||
"addons/local"
|
||||
]
|
||||
all_addons = [
|
||||
{
|
||||
"name": "Sexy Robots",
|
||||
"slug": "sexy_robots",
|
||||
"description": "The robots you already know, but sexier. See what they don't want you to see.",
|
||||
"version": "0.69",
|
||||
"size": 1,
|
||||
"logo": True,
|
||||
"state": "started"
|
||||
},
|
||||
{
|
||||
"name": "Particle Accelerator",
|
||||
"slug": "particla_accel",
|
||||
"description": "What CAN'T you do with Home Assistant?",
|
||||
"version": "0.5",
|
||||
"size": 500.3,
|
||||
"logo": True,
|
||||
"state": "started"
|
||||
},
|
||||
{
|
||||
"name": "Empty Addon",
|
||||
"slug": "addon_empty",
|
||||
"description": "Explore the meaning of the universe by contemplating whats missing.",
|
||||
"version": "0.-1",
|
||||
"size": 1024 * 1024 * 1024 * 21.2,
|
||||
"logo": False,
|
||||
"state": "started"
|
||||
}
|
||||
]
|
||||
|
||||
|
||||
def skipForWindows():
|
||||
if platform.system() == "Windows":
|
||||
pytest.skip("This test can't be run in windows environments")
|
||||
|
||||
|
||||
def skipForRoot():
|
||||
if os.getuid() == 0:
|
||||
pytest.skip("This test can't be run as root")
|
||||
|
||||
|
||||
def createBackupTar(slug: str, name: str, date: datetime, padSize: int, included_folders=None, included_addons=None, password=None) -> BytesIO:
|
||||
backup_type = "full"
|
||||
haVersion = None
|
||||
if included_folders is not None:
|
||||
folders = []
|
||||
for folder in included_folders:
|
||||
if folder == "homeassistant":
|
||||
haVersion = "0.92.2"
|
||||
else:
|
||||
folders.append(folder)
|
||||
else:
|
||||
folders = all_folders.copy()
|
||||
haVersion = "0.92.2"
|
||||
|
||||
if included_addons is not None:
|
||||
backup_type = "partial"
|
||||
addons = []
|
||||
for addon in all_addons:
|
||||
if addon['slug'] in included_addons:
|
||||
addons.append(addon)
|
||||
else:
|
||||
addons = all_addons.copy()
|
||||
|
||||
backup_info = {
|
||||
"slug": slug,
|
||||
"name": name,
|
||||
"date": date.isoformat(),
|
||||
"type": backup_type,
|
||||
"protected": password is not None,
|
||||
"homeassistant": haVersion,
|
||||
"folders": folders,
|
||||
"addons": addons,
|
||||
"repositories": [
|
||||
"https://github.com/hassio-addons/repository"
|
||||
]
|
||||
}
|
||||
stream = BytesIO()
|
||||
tar = tarfile.open(fileobj=stream, mode="w")
|
||||
add(tar, "backup.json", BytesIO(json.dumps(backup_info).encode()))
|
||||
add(tar, "padding.dat", getTestStream(padSize))
|
||||
tar.close()
|
||||
stream.seek(0)
|
||||
stream.size = lambda: len(stream.getbuffer())
|
||||
return stream
|
||||
|
||||
|
||||
def add(tar, name, stream):
|
||||
info = tarfile.TarInfo(name)
|
||||
info.size = len(stream.getbuffer())
|
||||
stream.seek(0)
|
||||
tar.addfile(info, stream)
|
||||
|
||||
|
||||
def parseBackupInfo(stream: BytesIO):
|
||||
with tarfile.open(fileobj=stream, mode="r") as tar:
|
||||
info = tar.getmember("backup.json")
|
||||
with tar.extractfile(info) as f:
|
||||
backup_data = json.load(f)
|
||||
backup_data['size'] = float(
|
||||
round(len(stream.getbuffer()) / 1024.0 / 1024.0, 2))
|
||||
backup_data['version'] = 'dev'
|
||||
return backup_data
|
||||
|
||||
|
||||
def getTestStream(size: int):
|
||||
"""
|
||||
Produces a stream of repeating prime sequences to avoid accidental repetition
|
||||
"""
|
||||
arr = bytearray()
|
||||
while True:
|
||||
for prime in [4759, 4783, 4787, 4789, 4793, 4799, 4801, 4813, 4817, 4831, 4861, 4871, 4877, 4889, 4903, 4909, 4919, 4931, 4933, 4937]:
|
||||
for x in range(prime):
|
||||
if len(arr) < size:
|
||||
arr.append(x % 255)
|
||||
else:
|
||||
break
|
||||
if len(arr) >= size:
|
||||
break
|
||||
if len(arr) >= size:
|
||||
break
|
||||
return BytesIO(arr)
|
||||
|
||||
|
||||
async def compareStreams(left, right):
|
||||
await left.setup()
|
||||
await right.setup()
|
||||
while True:
|
||||
from_left = await left.read(1024 * 1024)
|
||||
from_right = await right.read(1024 * 1024)
|
||||
if len(from_left.getbuffer()) == 0:
|
||||
assert len(from_right.getbuffer()) == 0
|
||||
break
|
||||
if from_left.getbuffer() != from_right.getbuffer():
|
||||
print("break!")
|
||||
assert from_left.getbuffer() == from_right.getbuffer()
|
||||
|
||||
|
||||
class IntentionalFailure(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class HelperTestSource(SimulatedSource):
|
||||
def __init__(self, name, is_destination=False):
|
||||
super().__init__(name, is_destination=is_destination)
|
||||
self.allow_create = True
|
||||
self.allow_save = True
|
||||
self.queries = 0
|
||||
|
||||
def reset(self):
|
||||
self.saved = []
|
||||
self.deleted = []
|
||||
self.created = []
|
||||
self.queries = 0
|
||||
|
||||
@property
|
||||
def query_count(self):
|
||||
return self.queries
|
||||
|
||||
async def get(self):
|
||||
self.queries += 1
|
||||
return await super().get()
|
||||
|
||||
def assertThat(self, created=0, deleted=0, saved=0, current=0):
|
||||
assert len(self.saved) == saved
|
||||
assert len(self.deleted) == deleted
|
||||
assert len(self.created) == created
|
||||
assert len(self.current) == current
|
||||
return self
|
||||
|
||||
def assertUnchanged(self):
|
||||
self.assertThat(current=len(self.current))
|
||||
return self
|
||||
|
||||
async def create(self, options: CreateOptions):
|
||||
if not self.allow_create:
|
||||
raise IntentionalFailure()
|
||||
return await super().create(options)
|
||||
|
||||
async def save(self, backup, bytes: IOBase = None):
|
||||
if not self.allow_save:
|
||||
raise IntentionalFailure()
|
||||
return await super().save(backup, bytes=bytes)
|
||||
|
||||
|
||||
@singleton
|
||||
class Uploader():
|
||||
@inject
|
||||
def __init__(self, host, session: ClientSession, time: Time):
|
||||
self.host = host
|
||||
self.session = session
|
||||
self.time = time
|
||||
|
||||
async def upload(self, data) -> AsyncHttpGetter:
|
||||
async with await self.session.post(self.host + "/uploadfile", data=data) as resp:
|
||||
resp.raise_for_status()
|
||||
source = AsyncHttpGetter(self.host + "/readfile", {}, self.session, time=self.time)
|
||||
return source
|
||||
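createBackupTar and parseBackupInfo above round-trip backup metadata: the first packs a generated backup.json plus padding into an in-memory tar, the second reads the metadata back out. A minimal sketch, assuming the module is importable as tests.helpers:

# Sketch only: round-trip a generated backup tar through the helpers above.
from datetime import datetime, timezone
from tests.helpers import createBackupTar, parseBackupInfo

stream = createBackupTar(
    slug="testslug",
    name="Test Backup",
    date=datetime(1985, 12, 6, tzinfo=timezone.utc),
    padSize=1024)  # 1 KB of filler in padding.dat

info = parseBackupInfo(stream)
assert info["slug"] == "testslug"
assert info["type"] == "full"  # no addon or folder filter was passed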
355
hassio-google-drive-backup/tests/test_addon_stopper.py
Normal file
@@ -0,0 +1,355 @@
|
||||
import json
|
||||
import pytest
|
||||
import os
|
||||
|
||||
from stat import S_IREAD
|
||||
from backup.config import Config, Setting
|
||||
from backup.ha import AddonStopper
|
||||
from backup.exceptions import SupervisorFileSystemError
|
||||
from .faketime import FakeTime
|
||||
from dev.simulated_supervisor import SimulatedSupervisor, URL_MATCH_START_ADDON, URL_MATCH_STOP_ADDON, URL_MATCH_ADDON_INFO
|
||||
from dev.request_interceptor import RequestInterceptor
|
||||
from .helpers import skipForRoot
|
||||
|
||||
|
||||
def getSaved(config: Config):
|
||||
with open(config.get(Setting.STOP_ADDON_STATE_PATH)) as f:
|
||||
data = json.load(f)
|
||||
return set(data["start"]), set(data["watchdog"])
|
||||
|
||||
|
||||
def save(config: Config, to_start, to_watchdog_enable):
|
||||
with open(config.get(Setting.STOP_ADDON_STATE_PATH), "w") as f:
|
||||
json.dump({"start": list(to_start), "watchdog": list(to_watchdog_enable)}, f)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_no_stop_config(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config) -> None:
|
||||
slug = "test_slug_1"
|
||||
supervisor.installAddon(slug, "Test decription")
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.isBackingUp(False)
|
||||
assert supervisor.addon(slug)["state"] == "started"
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
assert supervisor.addon(slug)["state"] == "started"
|
||||
await addon_stopper.check()
|
||||
await addon_stopper.startAddons()
|
||||
assert supervisor.addon(slug)["state"] == "started"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_load_addons_on_boot(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
slug2 = "test_slug_2"
|
||||
supervisor.installAddon(slug2, "Test decription")
|
||||
slug3 = "test_slug_3"
|
||||
supervisor.installAddon(slug3, "Test decription")
|
||||
|
||||
config.override(Setting.STOP_ADDONS, slug1)
|
||||
|
||||
save(config, {slug3}, {slug2})
|
||||
|
||||
await addon_stopper.start(False)
|
||||
assert addon_stopper.must_start == {slug3}
|
||||
assert addon_stopper.must_enable_watchdog == {slug2}
|
||||
|
||||
addon_stopper.allowRun()
|
||||
assert addon_stopper.must_start == {slug1, slug3}
|
||||
assert addon_stopper.must_enable_watchdog == {slug2}
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_do_nothing_while_backing_up(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config, interceptor: RequestInterceptor) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
slug2 = "test_slug_2"
|
||||
supervisor.installAddon(slug2, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1, slug2]))
|
||||
|
||||
await addon_stopper.start(False)
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.isBackingUp(True)
|
||||
assert addon_stopper.must_start == {slug1, slug2}
|
||||
|
||||
await addon_stopper.check()
|
||||
|
||||
assert not interceptor.urlWasCalled(URL_MATCH_START_ADDON)
|
||||
assert not interceptor.urlWasCalled(URL_MATCH_STOP_ADDON)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_start_and_stop(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
await addon_stopper.check()
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
await addon_stopper.startAddons()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
assert getSaved(config) == (set(), set())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_start_and_stop_error(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
await addon_stopper.check()
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
supervisor.addon(slug1)["state"] = "error"
|
||||
assert supervisor.addon(slug1)["state"] == "error"
|
||||
await addon_stopper.startAddons()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
assert getSaved(config) == (set(), set())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_stop_failure(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config, interceptor: RequestInterceptor) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, slug1)
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
interceptor.setError(URL_MATCH_STOP_ADDON, 400)
|
||||
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
assert interceptor.urlWasCalled(URL_MATCH_STOP_ADDON)
|
||||
assert getSaved(config) == (set(), set())
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
await addon_stopper.check()
|
||||
await addon_stopper.startAddons()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
assert getSaved(config) == (set(), set())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_start_failure(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config, interceptor: RequestInterceptor, time: FakeTime) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
await addon_stopper.check()
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
interceptor.setError(URL_MATCH_START_ADDON, 400)
|
||||
await addon_stopper.startAddons()
|
||||
assert getSaved(config) == (set(), set())
|
||||
assert interceptor.urlWasCalled(URL_MATCH_START_ADDON)
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_delayed_start(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config, interceptor: RequestInterceptor, time: FakeTime) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
|
||||
# start the addon again, which simulates the supervisor's tendency to report an addon as started right after stopping it.
|
||||
supervisor.addon(slug1)["state"] = "started"
|
||||
await addon_stopper.check()
|
||||
await addon_stopper.startAddons()
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
|
||||
time.advance(seconds=30)
|
||||
await addon_stopper.check()
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
|
||||
time.advance(seconds=30)
|
||||
await addon_stopper.check()
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
|
||||
time.advance(seconds=30)
|
||||
supervisor.addon(slug1)["state"] = "stopped"
|
||||
await addon_stopper.check()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
assert getSaved(config) == (set(), set())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_delayed_start_give_up(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config, interceptor: RequestInterceptor, time: FakeTime) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
|
||||
# start the addon again, which simulates the supervisor's tendency to report an addon as started right after stopping it.
|
||||
supervisor.addon(slug1)["state"] = "started"
|
||||
await addon_stopper.check()
|
||||
await addon_stopper.startAddons()
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
|
||||
time.advance(seconds=30)
|
||||
await addon_stopper.check()
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
|
||||
time.advance(seconds=30)
|
||||
await addon_stopper.check()
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
|
||||
# Should clear saved state after this, since it stops checking after 2 minutes.
|
||||
time.advance(seconds=100)
|
||||
await addon_stopper.check()
|
||||
assert getSaved(config) == (set(), set())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_disable_watchdog(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
supervisor.addon(slug1)["watchdog"] = True
|
||||
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
assert supervisor.addon(slug1)["watchdog"] is False
|
||||
await addon_stopper.check()
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
assert supervisor.addon(slug1)["watchdog"] is False
|
||||
await addon_stopper.startAddons()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
assert supervisor.addon(slug1)["watchdog"] is True
|
||||
assert getSaved(config) == (set(), set())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_enable_watchdog_on_reboot(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config, time: FakeTime) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
supervisor.addon(slug1)["watchdog"] = False
|
||||
save(config, set(), {slug1})
|
||||
|
||||
await addon_stopper.start(False)
|
||||
addon_stopper.allowRun()
|
||||
assert addon_stopper.must_enable_watchdog == {slug1}
|
||||
|
||||
time.advance(minutes=5)
|
||||
await addon_stopper.check()
|
||||
assert supervisor.addon(slug1)["watchdog"] is True
|
||||
assert getSaved(config) == (set(), set())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_enable_watchdog_waits_for_start(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
supervisor.addon(slug1)["watchdog"] = False
|
||||
save(config, {slug1}, {slug1})
|
||||
|
||||
await addon_stopper.start(False)
|
||||
addon_stopper.allowRun()
|
||||
assert addon_stopper.must_enable_watchdog == {slug1}
|
||||
|
||||
await addon_stopper.check()
|
||||
assert getSaved(config) == ({slug1}, {slug1})
|
||||
|
||||
supervisor.addon(slug1)["state"] = "stopped"
|
||||
await addon_stopper.check()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
assert supervisor.addon(slug1)["watchdog"] is True
|
||||
assert getSaved(config) == (set(), set())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_info_failure_on_stop(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config, interceptor: RequestInterceptor) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, slug1)
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
interceptor.setError(URL_MATCH_ADDON_INFO, 400)
|
||||
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
assert interceptor.urlWasCalled(URL_MATCH_ADDON_INFO)
|
||||
assert getSaved(config) == (set(), set())
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
await addon_stopper.check()
|
||||
await addon_stopper.startAddons()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
assert getSaved(config) == (set(), set())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_info_failure_on_start(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config, interceptor: RequestInterceptor) -> None:
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
await addon_stopper.check()
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
interceptor.setError(URL_MATCH_ADDON_INFO, 400)
|
||||
await addon_stopper.startAddons()
|
||||
assert getSaved(config) == (set(), set())
|
||||
assert interceptor.urlWasCalled(URL_MATCH_ADDON_INFO)
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_read_only_fs(supervisor: SimulatedSupervisor, addon_stopper: AddonStopper, config: Config, interceptor: RequestInterceptor) -> None:
|
||||
# This test can't be run as the root user, since no file is read-only to root.
|
||||
skipForRoot()
|
||||
|
||||
# Stop an addon
|
||||
slug1 = "test_slug_1"
|
||||
supervisor.installAddon(slug1, "Test decription")
|
||||
config.override(Setting.STOP_ADDONS, ",".join([slug1]))
|
||||
addon_stopper.allowRun()
|
||||
addon_stopper.must_start = set()
|
||||
assert supervisor.addon(slug1)["state"] == "started"
|
||||
await addon_stopper.stopAddons("ignore")
|
||||
assert supervisor.addon(slug1)["state"] == "stopped"
|
||||
await addon_stopper.check()
|
||||
assert getSaved(config) == ({slug1}, set())
|
||||
|
||||
# make the state file unmodifiable
|
||||
os.chmod(config.get(Setting.STOP_ADDON_STATE_PATH), S_IREAD)
|
||||
|
||||
# verify we raise a known error when trying to save.
|
||||
with pytest.raises(SupervisorFileSystemError):
|
||||
await addon_stopper.startAddons()
|
||||
117
hassio-google-drive-backup/tests/test_asynchttpgetter.py
Normal file
@@ -0,0 +1,117 @@
|
||||
from datetime import timedelta
|
||||
import pytest
|
||||
from aiohttp import ClientSession
|
||||
from aiohttp.web import StreamResponse
|
||||
from .conftest import Uploader
|
||||
from backup.exceptions import LogicError
|
||||
from dev.request_interceptor import RequestInterceptor
|
||||
from .conftest import FakeTime
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_basics(uploader: Uploader, server, session: ClientSession):
|
||||
getter = await uploader.upload(bytearray([0, 1, 2, 3, 4, 5, 6, 7]))
|
||||
await getter.setup()
|
||||
assert (await getter.read(1)).read() == bytearray([0])
|
||||
assert (await getter.read(2)).read() == bytearray([1, 2])
|
||||
assert (await getter.read(3)).read() == bytearray([3, 4, 5])
|
||||
assert (await getter.read(3)).read() == bytearray([6, 7])
|
||||
assert (await getter.read(3)).read() == bytearray([])
|
||||
assert (await getter.read(3)).read() == bytearray([])
|
||||
|
||||
getter.position(2)
|
||||
assert (await getter.read(2)).read() == bytearray([2, 3])
|
||||
assert (await getter.read(3)).read() == bytearray([4, 5, 6])
|
||||
|
||||
getter.position(2)
|
||||
assert (await getter.read(2)).read() == bytearray([2, 3])
|
||||
|
||||
getter.position(2)
|
||||
assert (await getter.read(2)).read() == bytearray([2, 3])
|
||||
assert (await getter.read(100)).read() == bytearray([4, 5, 6, 7])
|
||||
assert (await getter.read(3)).read() == bytearray([])
|
||||
assert (await getter.read(3)).read() == bytearray([])
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_position_error(uploader: Uploader, server):
|
||||
getter = await uploader.upload(bytearray([0, 1, 2, 3, 4, 5, 6, 7]))
|
||||
await getter.setup()
|
||||
assert (await getter.read(1)).read() == bytearray([0])
|
||||
|
||||
with pytest.raises(LogicError):
|
||||
await getter.setup()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_no_content_length(uploader: Uploader, server, interceptor: RequestInterceptor):
|
||||
getter = await uploader.upload(bytearray([0, 1, 2, 3, 4, 5, 6, 7]))
|
||||
intercept = interceptor.setError("/readfile")
|
||||
intercept.addResponse(StreamResponse(headers={}))
|
||||
with pytest.raises(LogicError) as e:
|
||||
await getter.setup()
|
||||
assert e.value.message() == "Content size must be provided if the webserver doesn't provide it"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_no_setup_error(uploader: Uploader, server):
|
||||
getter = await uploader.upload(bytearray([0, 1, 2, 3, 4, 5, 6, 7]))
|
||||
with pytest.raises(LogicError):
|
||||
await getter.read(1)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_progress(uploader: Uploader, server):
|
||||
getter = await uploader.upload(bytearray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
|
||||
await getter.setup()
|
||||
assert getter.progress() == 0
|
||||
assert (await getter.read(1)).read() == bytearray([0])
|
||||
assert getter.progress() == 10
|
||||
assert (await getter.read(2)).read() == bytearray([1, 2])
|
||||
assert getter.progress() == 30
|
||||
assert (await getter.read(7)).read() == bytearray([3, 4, 5, 6, 7, 8, 9])
|
||||
assert getter.progress() == 100
|
||||
assert str.format("{0}", getter) == "100"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_speed(uploader: Uploader, server, time: FakeTime):
|
||||
getter = await uploader.upload(bytearray(x for x in range(0, 100)))
|
||||
assert getter.startTime() == time.now()
|
||||
await getter.setup()
|
||||
assert getter.speed(period=timedelta(seconds=10)) is None
|
||||
time.advance(seconds=1)
|
||||
await getter.read(1)
|
||||
assert getter.speed(period=timedelta(seconds=10)) == 1
|
||||
|
||||
time.advance(seconds=1)
|
||||
await getter.read(1)
|
||||
assert getter.speed(period=timedelta(seconds=10)) == 1
|
||||
assert getter.speed(period=timedelta(seconds=1)) == 1
|
||||
assert getter.speed(period=timedelta(seconds=1.5)) == 1
|
||||
assert getter.speed(period=timedelta(seconds=0.5)) == 1
|
||||
|
||||
time.advance(seconds=1)
|
||||
assert getter.speed(period=timedelta(seconds=10)) == 1
|
||||
assert getter.speed(period=timedelta(seconds=1)) == 1
|
||||
assert getter.speed(period=timedelta(seconds=1.5)) == 1
|
||||
time.advance(seconds=0.5)
|
||||
assert getter.speed(period=timedelta(seconds=1)) == 0.5
|
||||
time.advance(seconds=0.5)
|
||||
assert getter.speed(period=timedelta(seconds=1)) == 0
|
||||
|
||||
# Now 4 seconds have passed, and we've transferred 4 bytes
|
||||
await getter.read(2)
|
||||
assert getter.speed(period=timedelta(seconds=4)) == 1
|
||||
assert getter.speed(period=timedelta(seconds=10)) == 1
|
||||
|
||||
time.advance(seconds=10)
|
||||
await getter.read(10)
|
||||
assert getter.speed(period=timedelta(seconds=10)) == 1
|
||||
|
||||
time.advance(seconds=10)
|
||||
await getter.read(20)
|
||||
assert getter.speed(period=timedelta(seconds=10)) == 2
|
||||
time.advance(seconds=10)
|
||||
assert getter.speed(period=timedelta(seconds=10)) == 2
|
||||
time.advance(seconds=5)
|
||||
assert getter.speed(period=timedelta(seconds=10)) == 1
|
||||
104
hassio-google-drive-backup/tests/test_authcodequery.py
Normal file
@@ -0,0 +1,104 @@
|
||||
import pytest
|
||||
|
||||
from backup.drive import AuthCodeQuery
|
||||
from backup.exceptions import LogicError, GoogleCredGenerateError, ProtocolError
|
||||
from dev.request_interceptor import RequestInterceptor
|
||||
from dev.simulated_google import URL_MATCH_TOKEN, SimulatedGoogle, URL_MATCH_DEVICE_CODE
|
||||
from aiohttp.web_response import json_response
|
||||
from backup.config import Config, Setting
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_invalid_sequence(device_code: AuthCodeQuery, interceptor: RequestInterceptor) -> None:
|
||||
with pytest.raises(LogicError):
|
||||
await device_code.waitForPermission()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_success(device_code: AuthCodeQuery, interceptor: RequestInterceptor, google: SimulatedGoogle, server) -> None:
|
||||
await device_code.requestCredentials(google._custom_drive_client_id, google._custom_drive_client_secret)
|
||||
|
||||
google._device_code_accepted = True
|
||||
assert await device_code.waitForPermission() is not None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_google_failure_on_request(device_code: AuthCodeQuery, interceptor: RequestInterceptor, google: SimulatedGoogle, server) -> None:
|
||||
interceptor.setError(URL_MATCH_DEVICE_CODE, 458)
|
||||
with pytest.raises(GoogleCredGenerateError) as error:
|
||||
await device_code.requestCredentials(google._custom_drive_client_id, google._custom_drive_client_secret)
|
||||
assert error.value.message() == "Google responded with error status HTTP 458. Please verify your credentials are set up correctly."
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_failure_on_http_unknown(device_code: AuthCodeQuery, interceptor: RequestInterceptor, google: SimulatedGoogle, server) -> None:
|
||||
await device_code.requestCredentials(google._custom_drive_client_id, google._custom_drive_client_secret)
|
||||
|
||||
interceptor.setError(URL_MATCH_TOKEN, 500)
|
||||
|
||||
with pytest.raises(GoogleCredGenerateError) as error:
|
||||
await device_code.waitForPermission()
|
||||
assert error.value.message() == "Failed unexpectedly while trying to reach Google. See the add-on logs for details."
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_success_after_wait(device_code: AuthCodeQuery, interceptor: RequestInterceptor, google: SimulatedGoogle, server) -> None:
|
||||
await device_code.requestCredentials(google._custom_drive_client_id, google._custom_drive_client_secret)
|
||||
|
||||
match = interceptor.setError(URL_MATCH_TOKEN)
|
||||
match.addResponse(json_response(data={'error': "slow_down"}, status=403))
|
||||
|
||||
google._device_code_accepted = True
|
||||
await device_code.waitForPermission()
|
||||
|
||||
assert match.callCount() == 2
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_success_after_428(device_code: AuthCodeQuery, interceptor: RequestInterceptor, google: SimulatedGoogle, server) -> None:
|
||||
await device_code.requestCredentials(google._custom_drive_client_id, google._custom_drive_client_secret)
|
||||
|
||||
match = interceptor.setError(URL_MATCH_TOKEN)
|
||||
match.addResponse(json_response(data={}, status=428))
|
||||
match.addResponse(json_response(data={}, status=428))
|
||||
match.addResponse(json_response(data={}, status=428))
|
||||
match.addResponse(json_response(data={}, status=428))
|
||||
match.addResponse(json_response(data={}, status=428))
|
||||
|
||||
google._device_code_accepted = True
|
||||
await device_code.waitForPermission()
|
||||
|
||||
assert match.callCount() == 6
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_permission_failure(device_code: AuthCodeQuery, interceptor: RequestInterceptor, google: SimulatedGoogle, server) -> None:
|
||||
await device_code.requestCredentials(google._custom_drive_client_id, google._custom_drive_client_secret)
|
||||
|
||||
match = interceptor.setError(URL_MATCH_TOKEN)
|
||||
match.addResponse(json_response(data={}, status=403))
|
||||
|
||||
google._device_code_accepted = False
|
||||
with pytest.raises(GoogleCredGenerateError) as error:
|
||||
await device_code.waitForPermission()
|
||||
assert error.value.message() == "Google refused the request to connect your account, either because you rejected it or they were set up incorrectly."
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_json_parse_failure(device_code: AuthCodeQuery, interceptor: RequestInterceptor, google: SimulatedGoogle, server) -> None:
|
||||
await device_code.requestCredentials(google._custom_drive_client_id, google._custom_drive_client_secret)
|
||||
|
||||
interceptor.setError(URL_MATCH_TOKEN, 200)
|
||||
|
||||
with pytest.raises(ProtocolError):
|
||||
await device_code.waitForPermission()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_repeated_failure(device_code: AuthCodeQuery, interceptor: RequestInterceptor, google: SimulatedGoogle, server, config: Config) -> None:
|
||||
await device_code.requestCredentials(google._custom_drive_client_id, google._custom_drive_client_secret)
|
||||
|
||||
config.override(Setting.DRIVE_TOKEN_URL, "http://go.nowhere")
|
||||
with pytest.raises(GoogleCredGenerateError) as error:
|
||||
await device_code.waitForPermission()
|
||||
assert error.value.message() == "Failed unexpectedly too many times while attempting to reach Google.  See the logs for details."
|
||||
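For context, here is a minimal sketch of the polling loop these tests exercise. It is not the add-on's actual implementation in `backup.drive`; the payload handling and error messages are assumptions based only on the assertions above: after `requestCredentials()` obtains a device code, `waitForPermission()` polls the token endpoint, treats HTTP 428 and a `slow_down` response as reasons to keep waiting, and fails when permission is refused.

```python
# Hypothetical polling loop, consistent with the behavior asserted above but
# not taken from the add-on's source.
import asyncio
import aiohttp


async def poll_for_token(session: aiohttp.ClientSession, token_url: str,
                         payload: dict, interval: float = 5.0) -> dict:
    while True:
        async with session.post(token_url, data=payload) as resp:
            if resp.status == 200:
                # Success: the body contains the generated credentials.
                return await resp.json()
            if resp.status == 428:
                # Authorization still pending; poll again after the interval.
                pass
            elif resp.status == 403:
                body = await resp.json()
                if body.get("error") == "slow_down":
                    # Google asked us to back off, so widen the interval.
                    interval *= 2
                else:
                    raise RuntimeError("Google refused the connection request")
            else:
                raise RuntimeError(f"Unexpected HTTP status {resp.status}")
        await asyncio.sleep(interval)
```

This explains why `test_success_after_wait` and `test_success_after_428` count one extra call to the token endpoint for each interim response before the final success.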
104
hassio-google-drive-backup/tests/test_backoff.py
Normal file
@@ -0,0 +1,104 @@
|
||||
from pytest import fixture, raises
|
||||
|
||||
from backup.util import Backoff
|
||||
|
||||
|
||||
@fixture
|
||||
def error():
|
||||
return Exception()
|
||||
|
||||
|
||||
def test_defaults(error):
|
||||
backoff = Backoff()
|
||||
assert backoff.backoff(error) == 2
|
||||
assert backoff.backoff(error) == 4
|
||||
assert backoff.backoff(error) == 8
|
||||
assert backoff.backoff(error) == 16
|
||||
assert backoff.backoff(error) == 32
|
||||
assert backoff.backoff(error) == 64
|
||||
assert backoff.backoff(error) == 128
|
||||
assert backoff.backoff(error) == 256
|
||||
assert backoff.backoff(error) == 512
|
||||
assert backoff.backoff(error) == 1024
|
||||
assert backoff.backoff(error) == 2048
|
||||
|
||||
for x in range(10000):
|
||||
assert backoff.backoff(error) == 3600
|
||||
|
||||
|
||||
def test_max(error):
|
||||
backoff = Backoff(max=500)
|
||||
assert backoff.backoff(error) == 2
|
||||
assert backoff.backoff(error) == 4
|
||||
assert backoff.backoff(error) == 8
|
||||
assert backoff.backoff(error) == 16
|
||||
assert backoff.backoff(error) == 32
|
||||
assert backoff.backoff(error) == 64
|
||||
assert backoff.backoff(error) == 128
|
||||
assert backoff.backoff(error) == 256
|
||||
|
||||
for x in range(10000):
|
||||
assert backoff.backoff(error) == 500
|
||||
|
||||
|
||||
def test_initial(error):
|
||||
backoff = Backoff(initial=0)
|
||||
assert backoff.backoff(error) == 0
|
||||
assert backoff.backoff(error) == 2
|
||||
assert backoff.backoff(error) == 4
|
||||
assert backoff.backoff(error) == 8
|
||||
assert backoff.backoff(error) == 16
|
||||
assert backoff.backoff(error) == 32
|
||||
assert backoff.backoff(error) == 64
|
||||
assert backoff.backoff(error) == 128
|
||||
assert backoff.backoff(error) == 256
|
||||
assert backoff.backoff(error) == 512
|
||||
assert backoff.backoff(error) == 1024
|
||||
assert backoff.backoff(error) == 2048
|
||||
|
||||
for x in range(10000):
|
||||
assert backoff.backoff(error) == 3600
|
||||
|
||||
|
||||
def test_attempts(error):
|
||||
backoff = Backoff(attempts=5)
|
||||
assert backoff.backoff(error) == 2
|
||||
assert backoff.backoff(error) == 4
|
||||
assert backoff.backoff(error) == 8
|
||||
assert backoff.backoff(error) == 16
|
||||
assert backoff.backoff(error) == 32
|
||||
|
||||
for x in range(5):
|
||||
with raises(type(error)):
|
||||
backoff.backoff(error)
|
||||
|
||||
|
||||
def test_start(error):
|
||||
backoff = Backoff(base=10)
|
||||
assert backoff.backoff(error) == 10
|
||||
assert backoff.backoff(error) == 20
|
||||
assert backoff.backoff(error) == 40
|
||||
assert backoff.backoff(error) == 80
|
||||
|
||||
|
||||
def test_realistic(error):
|
||||
backoff = Backoff(base=5, initial=0, exp=1.5, attempts=5)
|
||||
assert backoff.backoff(error) == 0
|
||||
assert backoff.backoff(error) == 5
|
||||
assert backoff.backoff(error) == 5 * 1.5
|
||||
assert backoff.backoff(error) == 5 * (1.5**2)
|
||||
assert backoff.backoff(error) == 5 * (1.5**3)
|
||||
for x in range(5):
|
||||
with raises(type(error)):
|
||||
backoff.backoff(error)
|
||||
|
||||
|
||||
def test_maxOut(error):
|
||||
backoff = Backoff(base=10, max=100)
|
||||
assert backoff.backoff(error) == 10
|
||||
assert backoff.backoff(error) == 20
|
||||
backoff.maxOut()
|
||||
assert backoff.backoff(error) == 100
|
||||
assert backoff.backoff(error) == 100
|
||||
backoff.reset()
|
||||
assert backoff.backoff(error) == 10
|
||||
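The assertions above pin down the backoff contract fairly precisely. As an illustration only (the real class is `backup.util.Backoff`; everything beyond the constructor arguments used in the tests is a guess), a helper with this behaviour could look like:

```python
# Sketch only: parameter names mirror how the tests construct Backoff; the
# internals are assumptions that reproduce the asserted values.
class BackoffSketch:
    def __init__(self, base=2, initial=None, exp=2, attempts=None, max=3600):
        self._base = base          # first delay after the optional initial one
        self._initial = initial    # delay returned by the very first call, if set
        self._exp = exp            # growth factor between successive delays
        self._attempts = attempts  # after this many backoffs, re-raise the error
        self._max = max            # ceiling on every returned delay
        self._count = 0
        self._maxed = False

    def backoff(self, error):
        self._count += 1
        if self._attempts is not None and self._count > self._attempts:
            # Out of retries: surface the original error to the caller.
            raise error
        if self._maxed:
            return self._max
        if self._count == 1 and self._initial is not None:
            return min(self._initial, self._max)
        steps = self._count - (2 if self._initial is not None else 1)
        return min(self._base * (self._exp ** steps), self._max)

    def maxOut(self):
        # Force the ceiling on subsequent calls until reset() is called.
        self._maxed = True

    def reset(self):
        self._count = 0
        self._maxed = False
```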
129
hassio-google-drive-backup/tests/test_bytesizeasstring.py
Normal file
@@ -0,0 +1,129 @@
|
||||
from backup.config import BytesizeAsStringValidator
|
||||
from backup.exceptions import InvalidConfigurationValue
|
||||
import pytest
|
||||
|
||||
|
||||
def test_minimum():
|
||||
parser = BytesizeAsStringValidator("test", minimum=10)
|
||||
assert parser.validate("11 bytes") == 11
|
||||
assert parser.validate(11) == 11
|
||||
with pytest.raises(InvalidConfigurationValue):
|
||||
parser.validate("9 bytes")
|
||||
|
||||
|
||||
def test_maximum():
|
||||
parser = BytesizeAsStringValidator("test", maximum=10)
|
||||
assert parser.validate("9 bytes") == 9
|
||||
assert parser.validate(9) == 9
|
||||
with pytest.raises(InvalidConfigurationValue):
|
||||
parser.validate("11 bytes")
|
||||
assert parser.formatForUi(9) == "9 B"
|
||||
|
||||
|
||||
def test_ui_format():
|
||||
parser = BytesizeAsStringValidator("test")
|
||||
assert parser.formatForUi(25) == "25 B"
|
||||
assert parser.formatForUi(25 * 1024) == "25 KB"
|
||||
assert parser.formatForUi(25 * 1024 * 1024) == "25 MB"
|
||||
assert parser.formatForUi(25 * 1024 * 1024 * 1024) == "25 GB"
|
||||
assert parser.formatForUi(25 * 1024 * 1024 * 1024 * 1024) == "25 TB"
|
||||
assert parser.formatForUi(25 * 1024 * 1024 * 1024 * 1024 * 1024) == "25 PB"
|
||||
assert parser.formatForUi(25 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024) == "25 EB"
|
||||
assert parser.formatForUi(25 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024) == "25 ZB"
|
||||
assert parser.formatForUi(25 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024) == "25 YB"
|
||||
assert parser.formatForUi(2000 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024) == "2000 YB"
|
||||
|
||||
assert parser.formatForUi(2.5 * 1024 * 1024) == "2.5 MB"
|
||||
assert parser.formatForUi(2.534525 * 1024 * 1024) == "2.534525 MB"
|
||||
assert parser.formatForUi(98743.1234 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024) == "98743.1234 YB"
|
||||
|
||||
assert parser.formatForUi(None) == ""
|
||||
assert parser.formatForUi("") == ""
|
||||
assert parser.formatForUi(0) == ""
|
||||
|
||||
|
||||
def test_numbers():
|
||||
parser = BytesizeAsStringValidator("test")
|
||||
assert parser.validate(1.2) == 1
assert parser.validate(1024.9) == 1024
assert parser.validate(1024) == 1024
|
||||
|
||||
|
||||
def test_parsing():
|
||||
parser = BytesizeAsStringValidator("test")
|
||||
assert parser.validate("1 B") == 1
|
||||
assert parser.validate("1 b") == 1
|
||||
assert parser.validate("1 bytes") == 1
|
||||
assert parser.validate("1 byte") == 1
|
||||
assert parser.validate("") is None
|
||||
assert parser.validate(" ") is None
|
||||
assert parser.validate(" 5. bytes ") == 5
|
||||
assert parser.validate("10b") == 10
|
||||
|
||||
assert parser.validate("1 KB") == 1024
|
||||
assert parser.validate("1 k") == 1024
|
||||
assert parser.validate("1 kb") == 1024
|
||||
assert parser.validate("1 kilobytes") == 1024
|
||||
assert parser.validate("1 kibibytes") == 1024
|
||||
assert parser.validate("1 kibi") == 1024
|
||||
assert parser.validate("2.5 KB") == 1024 * 2.5
|
||||
assert parser.validate("10k") == 10 * 1024
|
||||
|
||||
assert parser.validate("1 MB") == 1024 * 1024
|
||||
assert parser.validate("1 m") == 1024 * 1024
|
||||
assert parser.validate("1 mb") == 1024 * 1024
|
||||
assert parser.validate("1 megs") == 1024 * 1024
|
||||
assert parser.validate("1 mega") == 1024 * 1024
|
||||
assert parser.validate("1 megabytes") == 1024 * 1024
|
||||
assert parser.validate("1 mebibytes") == 1024 * 1024
|
||||
assert parser.validate("10m") == 10 * 1024 * 1024
|
||||
|
||||
assert parser.validate("1 GB") == 1024 * 1024 * 1024
|
||||
assert parser.validate("1 g") == 1024 * 1024 * 1024
|
||||
assert parser.validate("1 gb") == 1024 * 1024 * 1024
|
||||
assert parser.validate("1 gigs") == 1024 * 1024 * 1024
|
||||
assert parser.validate("1 gig") == 1024 * 1024 * 1024
|
||||
assert parser.validate("1 giga") == 1024 * 1024 * 1024
|
||||
assert parser.validate("1 gigabytes") == 1024 * 1024 * 1024
|
||||
assert parser.validate("1 gibibytes") == 1024 * 1024 * 1024
|
||||
assert parser.validate("10G") == 10 * 1024 * 1024 * 1024
|
||||
|
||||
assert parser.validate("1 TB") == 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 t") == 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 tb") == 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 tera") == 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 tebi") == 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 terabytes") == 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("10T") == 10 * 1024 * 1024 * 1024 * 1024
|
||||
|
||||
assert parser.validate("1 PB") == 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 p") == 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 pb") == 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 peta") == 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 pebi") == 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 petabytes") == 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("10P") == 10 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
|
||||
assert parser.validate("1 EB") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 e") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 eb") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 exa") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 exbi") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 exabytes") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("10E") == 10 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
|
||||
assert parser.validate("1 ZB") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 z") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 zb") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 zetta") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 zebi") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 zettabytes") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("10Z") == 10 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
|
||||
assert parser.validate("1 YB") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 y") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 yb") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 yotta") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 yobi") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("1 yottabytes") == 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
assert parser.validate("10Y") == 10 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024
|
||||
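As a rough illustration of the parsing rules these assertions describe (the real validator lives in `backup.config` and also enforces the minimum/maximum bounds and produces the UI strings tested above; the regex and unit table here are simplifications, not the add-on's code):

```python
# Simplified sketch of size-string parsing; not the add-on's validator.
import re

_UNITS = {"": 1, "b": 1, "k": 1024, "m": 1024**2, "g": 1024**3,
          "t": 1024**4, "p": 1024**5, "e": 1024**6, "z": 1024**7, "y": 1024**8}


def parse_byte_size(value: str):
    value = value.strip()
    if not value:
        return None
    match = re.fullmatch(r"([0-9.]+)\s*([a-zA-Z]*)", value)
    if match is None:
        raise ValueError(f"Can't parse '{value}' as a size")
    number = float(match.group(1))
    # "kb", "kilobytes" and "kibi" all resolve through their first letter.
    unit = match.group(2).lower()[:1]
    return number * _UNITS[unit]
```

For example, `parse_byte_size("10G")` yields `10 * 1024**3`, matching the expectation in the test above.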
307
hassio-google-drive-backup/tests/test_config.py
Normal file
@@ -0,0 +1,307 @@
|
||||
import os
|
||||
from pytest import raises
|
||||
|
||||
from backup.model import GenConfig
|
||||
from backup.config import Config, Setting
|
||||
from backup.exceptions import InvalidConfigurationValue
|
||||
|
||||
|
||||
def test_validate_empty():
|
||||
config = Config()
|
||||
assert config.validate({}) == defaultAnd()
|
||||
|
||||
|
||||
def test_validate_int():
|
||||
assert Config().validate({'max_backups_in_ha': 5}) == defaultAnd(
|
||||
{Setting.MAX_BACKUPS_IN_HA: 5})
|
||||
assert Config().validate({'max_backups_in_ha': 5.0}) == defaultAnd(
|
||||
{Setting.MAX_BACKUPS_IN_HA: 5})
|
||||
assert Config().validate({'max_backups_in_ha': "5"}) == defaultAnd(
|
||||
{Setting.MAX_BACKUPS_IN_HA: 5})
|
||||
|
||||
with raises(InvalidConfigurationValue):
|
||||
Config().validate({'max_backups_in_ha': -2})
|
||||
|
||||
|
||||
def test_validate_float():
|
||||
setting = Setting.DAYS_BETWEEN_BACKUPS
|
||||
assert Config().validate({setting: 5}) == defaultAnd({setting: 5})
|
||||
assert Config().validate({setting.key(): 5}) == defaultAnd({setting: 5})
|
||||
assert Config().validate({setting: 5.0}) == defaultAnd({setting: 5})
|
||||
assert Config().validate({setting: "5"}) == defaultAnd({setting: 5})
|
||||
|
||||
with raises(InvalidConfigurationValue):
|
||||
Config().validate({'days_between_backups': -1})
|
||||
|
||||
|
||||
def test_validate_bool():
|
||||
setting = Setting.SEND_ERROR_REPORTS
|
||||
assert Config().validate({setting: True}) == defaultAnd({setting: True})
|
||||
assert Config().validate({setting: False}) == defaultAnd({setting: False})
|
||||
assert Config().validate({setting: "true"}) == defaultAnd({setting: True})
|
||||
assert Config().validate({setting: "false"}) == defaultAnd({setting: False})
|
||||
assert Config().validate({setting: "1"}) == defaultAnd({setting: True})
|
||||
assert Config().validate({setting: "0"}) == defaultAnd({setting: False})
|
||||
assert Config().validate({setting: "yes"}) == defaultAnd({setting: True})
|
||||
assert Config().validate({setting: "no"}) == defaultAnd({setting: False})
|
||||
assert Config().validate({setting: "on"}) == defaultAnd({setting: True})
|
||||
assert Config().validate({setting: "off"}) == defaultAnd({setting: False})
|
||||
|
||||
|
||||
def test_validate_string():
|
||||
assert Config().validate({Setting.BACKUP_NAME: True}) == defaultAnd({Setting.BACKUP_NAME: "True"})
|
||||
assert Config().validate({Setting.BACKUP_NAME: False}) == defaultAnd({Setting.BACKUP_NAME: "False"})
|
||||
assert Config().validate({Setting.BACKUP_NAME: "true"}) == defaultAnd({Setting.BACKUP_NAME: "true"})
|
||||
assert Config().validate({Setting.BACKUP_NAME: "false"}) == defaultAnd({Setting.BACKUP_NAME: "false"})
|
||||
assert Config().validate({Setting.BACKUP_NAME: "1"}) == defaultAnd({Setting.BACKUP_NAME: "1"})
|
||||
assert Config().validate({Setting.BACKUP_NAME: "0"}) == defaultAnd({Setting.BACKUP_NAME: "0"})
|
||||
assert Config().validate({Setting.BACKUP_NAME: "yes"}) == defaultAnd({Setting.BACKUP_NAME: "yes"})
|
||||
assert Config().validate({Setting.BACKUP_NAME: "no"}) == defaultAnd({Setting.BACKUP_NAME: "no"})
|
||||
|
||||
|
||||
def test_validate_url():
|
||||
assert Config().validate({Setting.SUPERVISOR_URL: True}) == defaultAnd(
|
||||
{Setting.SUPERVISOR_URL: "True"})
|
||||
assert Config().validate({Setting.SUPERVISOR_URL: False}) == defaultAnd(
|
||||
{Setting.SUPERVISOR_URL: "False"})
|
||||
assert Config().validate({Setting.SUPERVISOR_URL: "true"}) == defaultAnd(
|
||||
{Setting.SUPERVISOR_URL: "true"})
|
||||
assert Config().validate({Setting.SUPERVISOR_URL: "false"}) == defaultAnd(
|
||||
{Setting.SUPERVISOR_URL: "false"})
|
||||
assert Config().validate({Setting.SUPERVISOR_URL: "1"}) == defaultAnd(
|
||||
{Setting.SUPERVISOR_URL: "1"})
|
||||
assert Config().validate({Setting.SUPERVISOR_URL: "0"}) == defaultAnd(
|
||||
{Setting.SUPERVISOR_URL: "0"})
|
||||
assert Config().validate({Setting.SUPERVISOR_URL: "yes"}) == defaultAnd(
|
||||
{Setting.SUPERVISOR_URL: "yes"})
|
||||
assert Config().validate({Setting.SUPERVISOR_URL: "no"}) == defaultAnd(
|
||||
{Setting.SUPERVISOR_URL: "no"})
|
||||
|
||||
|
||||
def test_validate_regex():
|
||||
assert Config().validate({Setting.DRIVE_IPV4: "192.168.1.1"}) == defaultAnd(
|
||||
{Setting.DRIVE_IPV4: "192.168.1.1"})
|
||||
with raises(InvalidConfigurationValue):
|
||||
Config().validate({Setting.DRIVE_IPV4: -1})
|
||||
with raises(InvalidConfigurationValue):
|
||||
Config().validate({Setting.DRIVE_IPV4: "192.168.1"})
|
||||
|
||||
|
||||
def test_remove_ssl():
|
||||
assert Config().validate({Setting.USE_SSL: True}) == defaultAnd({Setting.USE_SSL: True})
|
||||
assert Config().validate({Setting.USE_SSL: False}) == defaultAnd()
|
||||
assert Config().validate({
|
||||
Setting.USE_SSL: False,
|
||||
Setting.CERTFILE: "removed",
|
||||
Setting.KEYFILE: 'removed'
|
||||
}) == defaultAnd()
|
||||
assert Config().validate({
|
||||
Setting.USE_SSL: True,
|
||||
Setting.CERTFILE: "kept",
|
||||
Setting.KEYFILE: 'kept'
|
||||
}) == defaultAnd({
|
||||
Setting.USE_SSL: True,
|
||||
Setting.CERTFILE: "kept",
|
||||
Setting.KEYFILE: 'kept'
|
||||
})
|
||||
|
||||
|
||||
def test_send_error_reports():
|
||||
assert Config().validate({Setting.SEND_ERROR_REPORTS: False}) == defaultAnd(
|
||||
{Setting.SEND_ERROR_REPORTS: False})
|
||||
assert Config().validate({Setting.SEND_ERROR_REPORTS: True}) == defaultAnd(
|
||||
{Setting.SEND_ERROR_REPORTS: True})
|
||||
assert Config().validate(
|
||||
{Setting.SEND_ERROR_REPORTS: None}) == defaultAnd()
|
||||
|
||||
|
||||
def test_unrecognized_values_filter():
|
||||
assert Config().validate({'blah': "bloo"}) == defaultAnd()
|
||||
|
||||
|
||||
def test_removes_defaults():
|
||||
assert Config().validate(
|
||||
{Setting.BACKUP_TIME_OF_DAY: ""}) == defaultAnd()
|
||||
|
||||
|
||||
def defaultAnd(config={}):
|
||||
ret = {
|
||||
Setting.DAYS_BETWEEN_BACKUPS: 3,
|
||||
Setting.MAX_BACKUPS_IN_HA: 4,
|
||||
Setting.MAX_BACKUPS_IN_GOOGLE_DRIVE: 4
|
||||
}
|
||||
ret.update(config)
|
||||
return (ret, False)
|
||||
|
||||
|
||||
def test_GenerationalConfig() -> None:
|
||||
assert Config().getGenerationalConfig() is None
|
||||
|
||||
assert Config().override(Setting.GENERATIONAL_DAYS, 5).getGenerationalConfig() == GenConfig(days=5)
|
||||
assert Config().override(Setting.GENERATIONAL_WEEKS, 3).getGenerationalConfig() == GenConfig(days=1, weeks=3)
|
||||
assert Config().override(Setting.GENERATIONAL_MONTHS, 3).getGenerationalConfig() == GenConfig(days=1, months=3)
|
||||
assert Config().override(Setting.GENERATIONAL_YEARS, 3).getGenerationalConfig() == GenConfig(days=1, years=3)
|
||||
assert Config().override(Setting.GENERATIONAL_DELETE_EARLY, True).override(
|
||||
Setting.GENERATIONAL_DAYS, 2).getGenerationalConfig() == GenConfig(days=2, aggressive=True)
|
||||
assert Config().override(Setting.GENERATIONAL_DAYS, 1).override(
|
||||
Setting.GENERATIONAL_DAY_OF_YEAR, 3).getGenerationalConfig() == GenConfig(days=1, day_of_year=3)
|
||||
assert Config().override(Setting.GENERATIONAL_DAYS, 1).override(
|
||||
Setting.GENERATIONAL_DAY_OF_MONTH, 3).getGenerationalConfig() == GenConfig(days=1, day_of_month=3)
|
||||
assert Config().override(Setting.GENERATIONAL_DAYS, 1).override(
|
||||
Setting.GENERATIONAL_DAY_OF_WEEK, "tue").getGenerationalConfig() == GenConfig(days=1, day_of_week="tue")
|
||||
|
||||
assert Config().override(Setting.GENERATIONAL_DAY_OF_MONTH, 3).override(Setting.GENERATIONAL_DAY_OF_WEEK, "tue").override(Setting.GENERATIONAL_DAY_OF_YEAR, "4").getGenerationalConfig() is None
|
||||
|
||||
|
||||
def test_from_environment():
|
||||
assert Config.fromEnvironment().get(Setting.PORT) != 1000
|
||||
|
||||
os.environ["PORT"] = str(1000)
|
||||
assert Config.fromEnvironment().get(Setting.PORT) == 1000
|
||||
|
||||
del os.environ["PORT"]
|
||||
assert Config.fromEnvironment().get(Setting.PORT) != 1000
|
||||
|
||||
os.environ["port"] = str(1000)
|
||||
assert Config.fromEnvironment().get(Setting.PORT) == 1000
|
||||
|
||||
|
||||
def test_config_upgrade():
|
||||
# Test specifying one value
|
||||
config = Config()
|
||||
config.update({Setting.DEPRECTAED_BACKUP_TIME_OF_DAY: "00:01"})
|
||||
assert (config.getAllConfig(), False) == defaultAnd({
|
||||
Setting.BACKUP_TIME_OF_DAY: "00:01",
|
||||
Setting.CALL_BACKUP_SNAPSHOT: True
|
||||
})
|
||||
assert config.mustSaveUpgradeChanges()
|
||||
|
||||
# Test specifying multiple values
|
||||
config = Config()
|
||||
config.update({
|
||||
Setting.DEPRECTAED_MAX_BACKUPS_IN_GOOGLE_DRIVE: 21,
|
||||
Setting.DEPRECTAED_MAX_BACKUPS_IN_HA: 20,
|
||||
Setting.DEPRECATED_BACKUP_PASSWORD: "boop"
|
||||
})
|
||||
assert config.getAllConfig() == defaultAnd({
|
||||
Setting.MAX_BACKUPS_IN_HA: 20,
|
||||
Setting.MAX_BACKUPS_IN_GOOGLE_DRIVE: 21,
|
||||
Setting.BACKUP_PASSWORD: "boop",
|
||||
Setting.CALL_BACKUP_SNAPSHOT: True
|
||||
})[0]
|
||||
assert config.mustSaveUpgradeChanges()
|
||||
|
||||
# Test specifying values that don't get upgraded
|
||||
config = Config()
|
||||
config.update({Setting.EXCLUDE_ADDONS: "test"})
|
||||
assert config.getAllConfig() == defaultAnd({
|
||||
Setting.EXCLUDE_ADDONS: "test"
|
||||
})[0]
|
||||
assert not config.mustSaveUpgradeChanges()
|
||||
|
||||
# Test specifying both
|
||||
config = Config()
|
||||
config.update({
|
||||
Setting.DEPRECTAED_BACKUP_TIME_OF_DAY: "00:01",
|
||||
Setting.EXCLUDE_ADDONS: "test"
|
||||
})
|
||||
assert config.getAllConfig() == defaultAnd({
|
||||
Setting.BACKUP_TIME_OF_DAY: "00:01",
|
||||
Setting.EXCLUDE_ADDONS: "test",
|
||||
Setting.CALL_BACKUP_SNAPSHOT: True
|
||||
})[0]
|
||||
assert config.mustSaveUpgradeChanges()
|
||||
|
||||
|
||||
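A hypothetical reading of the migration the upgrade test above describes (the actual rules live in `backup.config` and, as the tests below show, also consider whether the new key is still at its default before deciding which value wins):

```python
# Illustrative only; key names are stand-ins for the real Setting enum values.
def upgrade_snapshot_settings(raw: dict, deprecated_map: dict):
    upgraded = dict(raw)
    changed = False
    for old_key, new_key in deprecated_map.items():
        if old_key in upgraded:
            value = upgraded.pop(old_key)
            # Keep an explicitly-set new value, otherwise adopt the old one.
            upgraded.setdefault(new_key, value)
            # The tests expect CALL_BACKUP_SNAPSHOT to be switched on whenever
            # a deprecated key triggered an upgrade.
            upgraded["call_backup_snapshot"] = True
            changed = True
    return upgraded, changed
```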
def test_overwrite_on_upgrade():
|
||||
config = Config()
|
||||
config.update({
|
||||
Setting.DEPRECTAED_MAX_BACKUPS_IN_HA: 5,
|
||||
Setting.MAX_BACKUPS_IN_HA: 6
|
||||
})
|
||||
assert (config.getAllConfig(), False) == defaultAnd({
|
||||
Setting.MAX_BACKUPS_IN_HA: 6,
|
||||
Setting.CALL_BACKUP_SNAPSHOT: True
|
||||
})
|
||||
assert config.mustSaveUpgradeChanges()
|
||||
|
||||
config = Config()
|
||||
config.update({
|
||||
Setting.MAX_BACKUPS_IN_HA: 6,
|
||||
Setting.DEPRECTAED_MAX_BACKUPS_IN_HA: 5
|
||||
})
|
||||
assert (config.getAllConfig(), False) == defaultAnd({
|
||||
Setting.MAX_BACKUPS_IN_HA: 6,
|
||||
Setting.CALL_BACKUP_SNAPSHOT: True
|
||||
})
|
||||
assert config.mustSaveUpgradeChanges()
|
||||
|
||||
config = Config()
|
||||
config.update({
|
||||
Setting.MAX_BACKUPS_IN_HA: 6,
|
||||
Setting.DEPRECTAED_MAX_BACKUPS_IN_HA: 4
|
||||
})
|
||||
assert (config.getAllConfig(), False) == defaultAnd({
|
||||
Setting.MAX_BACKUPS_IN_HA: 6,
|
||||
Setting.CALL_BACKUP_SNAPSHOT: True
|
||||
})
|
||||
assert config.mustSaveUpgradeChanges()
|
||||
|
||||
|
||||
def test_overwrite_on_upgrade_default_value():
|
||||
# Deprecated value wins when the new key is still at its default
|
||||
config = Config()
|
||||
config.update({
|
||||
Setting.DEPRECTAED_MAX_BACKUPS_IN_HA: Setting.MAX_BACKUPS_IN_HA.default() + 1,
|
||||
Setting.MAX_BACKUPS_IN_HA: Setting.MAX_BACKUPS_IN_HA.default()
|
||||
})
|
||||
assert (config.getAllConfig(), False) == defaultAnd({
|
||||
Setting.MAX_BACKUPS_IN_HA: Setting.MAX_BACKUPS_IN_HA.default() + 1,
|
||||
Setting.CALL_BACKUP_SNAPSHOT: True
|
||||
})
|
||||
assert config.mustSaveUpgradeChanges()
|
||||
|
||||
config = Config()
|
||||
config.update({
|
||||
Setting.MAX_BACKUPS_IN_HA: Setting.MAX_BACKUPS_IN_HA.default(),
|
||||
Setting.DEPRECTAED_MAX_BACKUPS_IN_HA: Setting.MAX_BACKUPS_IN_HA.default() + 1
|
||||
})
|
||||
assert (config.getAllConfig(), False) == defaultAnd({
|
||||
Setting.MAX_BACKUPS_IN_HA: Setting.MAX_BACKUPS_IN_HA.default() + 1,
|
||||
Setting.CALL_BACKUP_SNAPSHOT: True
|
||||
})
|
||||
assert config.mustSaveUpgradeChanges()
|
||||
|
||||
|
||||
def test_empty_colors():
|
||||
# Empty color strings should fall back to the default colors
|
||||
config = Config()
|
||||
config.update({Setting.BACKGROUND_COLOR: "", Setting.ACCENT_COLOR: ""})
|
||||
assert config.get(Setting.BACKGROUND_COLOR) == Setting.BACKGROUND_COLOR.default()
|
||||
assert config.get(Setting.ACCENT_COLOR) == Setting.ACCENT_COLOR.default()
|
||||
|
||||
|
||||
def test_ignore_upgrades_default():
|
||||
# Upgrade backups are ignored by default unless the legacy behavior is requested
|
||||
config = Config()
|
||||
assert config.get(Setting.IGNORE_UPGRADE_BACKUPS)
|
||||
|
||||
config.useLegacyIgnoredBehavior(True)
|
||||
assert not config.get(Setting.IGNORE_UPGRADE_BACKUPS)
|
||||
|
||||
config.useLegacyIgnoredBehavior(False)
|
||||
assert config.get(Setting.IGNORE_UPGRADE_BACKUPS)
|
||||
|
||||
|
||||
def getGenConfig(update):
|
||||
base = {
|
||||
"days": 1,
|
||||
"weeks": 0,
|
||||
"months": 0,
|
||||
"years": 0,
|
||||
"day_of_week": "mon",
|
||||
"day_of_year": 1,
|
||||
"day_of_month": 1
|
||||
}
|
||||
base.update(update)
|
||||
return base
|
||||
552
hassio-google-drive-backup/tests/test_coordinator.py
Normal file
@@ -0,0 +1,552 @@
|
||||
import asyncio
|
||||
from datetime import timedelta
|
||||
|
||||
import pytest
|
||||
from pytest import raises
|
||||
|
||||
from backup.config import Config, Setting, CreateOptions
|
||||
from backup.exceptions import LogicError, LowSpaceError, NoBackup, PleaseWait, UserCancelledError
|
||||
from backup.util import GlobalInfo, DataCache
|
||||
from backup.model import Coordinator, Model, Backup, DestinationPrecache
|
||||
from .conftest import FsFaker
|
||||
from .faketime import FakeTime
|
||||
from .helpers import HelperTestSource, skipForWindows
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def source():
|
||||
return HelperTestSource("Source")
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def dest():
|
||||
return HelperTestSource("Dest")
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def simple_config():
|
||||
config = Config()
|
||||
config.override(Setting.BACKUP_STARTUP_DELAY_MINUTES, 0)
|
||||
return config
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def model(source, dest, time, simple_config, global_info, estimator, data_cache: DataCache):
|
||||
return Model(simple_config, time, source, dest, global_info, estimator, data_cache)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def coord(model, time, simple_config, global_info, estimator):
|
||||
return Coordinator(model, time, simple_config, global_info, estimator)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def precache(coord, time, dest, simple_config):
|
||||
return DestinationPrecache(coord, time, dest, simple_config)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_enabled(coord: Coordinator, dest, time):
|
||||
dest.setEnabled(True)
|
||||
assert coord.enabled()
|
||||
dest.setEnabled(False)
|
||||
assert not coord.enabled()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_sync(coord: Coordinator, global_info: GlobalInfo, time: FakeTime):
|
||||
await coord.sync()
|
||||
assert global_info._syncs == 1
|
||||
assert global_info._successes == 1
|
||||
assert global_info._last_sync_start == time.now()
|
||||
assert len(coord.backups()) == 1
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_blocking(coord: Coordinator):
|
||||
# This just makes sure the wait thread is blocked while we do stuff
|
||||
event_start = asyncio.Event()
|
||||
event_end = asyncio.Event()
|
||||
asyncio.create_task(coord._withSoftLock(lambda: sleepHelper(event_start, event_end)))
|
||||
await event_start.wait()
|
||||
|
||||
# Make sure PleaseWait gets called on these
|
||||
with raises(PleaseWait):
|
||||
await coord.delete(None, None)
|
||||
with raises(PleaseWait):
|
||||
await coord.sync()
|
||||
with raises(PleaseWait):
|
||||
await coord.uploadBackups(None)
|
||||
with raises(PleaseWait):
|
||||
await coord.startBackup(None)
|
||||
event_end.set()
|
||||
|
||||
|
||||
async def sleepHelper(event_start: asyncio.Event, event_end: asyncio.Event):
|
||||
event_start.set()
|
||||
await event_end.wait()
|
||||
|
||||
|
||||
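test_blocking above relies on the coordinator rejecting concurrent operations rather than queueing them. A minimal sketch of that idea, assuming an asyncio lock and with PleaseWait stood in by a generic exception (the coordinator's real `_withSoftLock` may differ):

```python
# Sketch of a "soft" lock: a second caller fails fast instead of waiting.
import asyncio


class SoftLock:
    def __init__(self):
        self._lock = asyncio.Lock()

    async def run(self, operation):
        if self._lock.locked():
            # The real coordinator raises PleaseWait here.
            raise RuntimeError("Please wait for the current operation to finish")
        async with self._lock:
            return await operation()
```

Each public entry point (delete, sync, uploadBackups, startBackup) would funnel through something like `run()`, which is why the test sees PleaseWait while the helper coroutine holds the lock.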
@pytest.mark.asyncio
|
||||
async def test_new_backup(coord: Coordinator, time: FakeTime, source, dest):
|
||||
await coord.startBackup(CreateOptions(time.now(), "Test Name"))
|
||||
backups = coord.backups()
|
||||
assert len(backups) == 1
|
||||
assert backups[0].name() == "Test Name"
|
||||
assert backups[0].getSource(source.name()) is not None
|
||||
assert backups[0].getSource(dest.name()) is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_sync_error(coord: Coordinator, global_info: GlobalInfo, time: FakeTime, model):
|
||||
error = Exception("BOOM")
|
||||
old_sync = model.sync
|
||||
model.sync = lambda s: doRaise(error)
|
||||
await coord.sync()
|
||||
assert global_info._last_error is error
|
||||
assert global_info._last_failure_time == time.now()
|
||||
assert global_info._successes == 0
|
||||
model.sync = old_sync
|
||||
await coord.sync()
|
||||
assert global_info._last_error is None
|
||||
assert global_info._successes == 1
|
||||
assert global_info._last_success == time.now()
|
||||
await coord.sync()
|
||||
|
||||
|
||||
def doRaise(error):
|
||||
raise error
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_delete(coord: Coordinator, backup, source, dest):
|
||||
assert backup.getSource(source.name()) is not None
|
||||
assert backup.getSource(dest.name()) is not None
|
||||
await coord.delete([source.name()], backup.slug())
|
||||
assert len(coord.backups()) == 1
|
||||
assert backup.getSource(source.name()) is None
|
||||
assert backup.getSource(dest.name()) is not None
|
||||
await coord.delete([dest.name()], backup.slug())
|
||||
assert backup.getSource(source.name()) is None
|
||||
assert backup.getSource(dest.name()) is None
|
||||
assert backup.isDeleted()
|
||||
assert len(coord.backups()) == 0
|
||||
|
||||
await coord.sync()
|
||||
assert len(coord.backups()) == 1
|
||||
await coord.delete([source.name(), dest.name()], coord.backups()[0].slug())
|
||||
assert len(coord.backups()) == 0
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_delete_errors(coord: Coordinator, source, dest, backup):
|
||||
with raises(NoBackup):
|
||||
await coord.delete([source.name()], "badslug")
|
||||
bad_source = HelperTestSource("bad")
|
||||
with raises(NoBackup):
|
||||
await coord.delete([bad_source.name()], backup.slug())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_retain(coord: Coordinator, source, dest, backup):
|
||||
assert not backup.getSource(source.name()).retained()
|
||||
assert not backup.getSource(dest.name()).retained()
|
||||
await coord.retain({
|
||||
source.name(): True,
|
||||
dest.name(): True
|
||||
}, backup.slug())
|
||||
assert backup.getSource(source.name()).retained()
|
||||
assert backup.getSource(dest.name()).retained()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_retain_errors(coord: Coordinator, source, dest, backup):
|
||||
with raises(NoBackup):
|
||||
await coord.retain({source.name(): True}, "badslug")
|
||||
bad_source = HelperTestSource("bad")
|
||||
with raises(NoBackup):
|
||||
await coord.delete({bad_source.name(): True}, backup.slug())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_freshness(coord: Coordinator, source: HelperTestSource, dest: HelperTestSource, backup: Backup, time: FakeTime):
|
||||
source.setMax(2)
|
||||
dest.setMax(2)
|
||||
await coord.sync()
|
||||
assert backup.getPurges() == {
|
||||
source.name(): False,
|
||||
dest.name(): False
|
||||
}
|
||||
|
||||
source.setMax(1)
|
||||
dest.setMax(1)
|
||||
await coord.sync()
|
||||
assert backup.getPurges() == {
|
||||
source.name(): True,
|
||||
dest.name(): True
|
||||
}
|
||||
|
||||
dest.setMax(0)
|
||||
await coord.sync()
|
||||
assert backup.getPurges() == {
|
||||
source.name(): True,
|
||||
dest.name(): False
|
||||
}
|
||||
|
||||
source.setMax(0)
|
||||
await coord.sync()
|
||||
assert backup.getPurges() == {
|
||||
source.name(): False,
|
||||
dest.name(): False
|
||||
}
|
||||
|
||||
source.setMax(2)
|
||||
dest.setMax(2)
|
||||
time.advance(days=7)
|
||||
await coord.sync()
|
||||
assert len(coord.backups()) == 2
|
||||
assert backup.getPurges() == {
|
||||
source.name(): True,
|
||||
dest.name(): True
|
||||
}
|
||||
assert coord.backups()[1].getPurges() == {
|
||||
source.name(): False,
|
||||
dest.name(): False
|
||||
}
|
||||
|
||||
# should refresh on delete
|
||||
source.setMax(1)
|
||||
dest.setMax(1)
|
||||
await coord.delete([source.name()], backup.slug())
|
||||
assert coord.backups()[0].getPurges() == {
|
||||
dest.name(): True
|
||||
}
|
||||
assert coord.backups()[1].getPurges() == {
|
||||
source.name(): True,
|
||||
dest.name(): False
|
||||
}
|
||||
|
||||
# should update on retain
|
||||
await coord.retain({dest.name(): True}, backup.slug())
|
||||
assert coord.backups()[0].getPurges() == {
|
||||
dest.name(): False
|
||||
}
|
||||
assert coord.backups()[1].getPurges() == {
|
||||
source.name(): True,
|
||||
dest.name(): True
|
||||
}
|
||||
|
||||
# should update on upload
|
||||
await coord.uploadBackups(coord.backups()[0].slug())
|
||||
assert coord.backups()[0].getPurges() == {
|
||||
dest.name(): False,
|
||||
source.name(): True
|
||||
}
|
||||
assert coord.backups()[1].getPurges() == {
|
||||
source.name(): False,
|
||||
dest.name(): True
|
||||
}
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_upload(coord: Coordinator, source: HelperTestSource, dest: HelperTestSource, backup):
|
||||
await coord.delete([source.name()], backup.slug())
|
||||
assert backup.getSource(source.name()) is None
|
||||
await coord.uploadBackups(backup.slug())
|
||||
assert backup.getSource(source.name()) is not None
|
||||
|
||||
with raises(LogicError):
|
||||
await coord.uploadBackups(backup.slug())
|
||||
|
||||
with raises(NoBackup):
|
||||
await coord.uploadBackups("bad slug")
|
||||
|
||||
await coord.delete([dest.name()], backup.slug())
|
||||
with raises(NoBackup):
|
||||
await coord.uploadBackups(backup.slug())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_download(coord: Coordinator, source, dest, backup):
|
||||
await coord.download(backup.slug())
|
||||
await coord.delete([source.name()], backup.slug())
|
||||
await coord.download(backup.slug())
|
||||
|
||||
with raises(NoBackup):
|
||||
await coord.download("bad slug")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_backoff(coord: Coordinator, model, source: HelperTestSource, dest: HelperTestSource, backup, time: FakeTime, simple_config: Config):
|
||||
assert await coord.check()
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
simple_config.override(Setting.MAX_SYNC_INTERVAL_SECONDS, 60 * 60 * 6)
|
||||
simple_config.override(Setting.DEFAULT_SYNC_INTERVAL_VARIATION, 0)
|
||||
|
||||
assert coord.nextSyncAttempt() == time.now() + timedelta(hours=6)
|
||||
assert not await coord.check()
|
||||
old_sync = model.sync
|
||||
model.sync = lambda s: doRaise(Exception("BOOM"))
|
||||
await coord.sync()
|
||||
|
||||
# first backoff should be 0 seconds
|
||||
assert coord.nextSyncAttempt() == time.now()
|
||||
assert await coord.check()
|
||||
|
||||
# backoff maxes out at 2 hr = 7200 seconds
|
||||
for seconds in [10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 7200, 7200]:
|
||||
await coord.sync()
|
||||
assert coord.nextSyncAttempt() == time.now() + timedelta(seconds=seconds)
|
||||
assert not await coord.check()
|
||||
assert not await coord.check()
|
||||
assert not await coord.check()
|
||||
|
||||
# a good sync resets it back to 6 hours from now
|
||||
model.sync = old_sync
|
||||
await coord.sync()
|
||||
assert coord.nextSyncAttempt() == time.now() + timedelta(hours=6)
|
||||
assert not await coord.check()
|
||||
|
||||
# if the next backup is less than 6 hours after the last one, then that should be when we sync
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1.0 / 24.0)
|
||||
assert coord.nextSyncAttempt() == time.now() + timedelta(hours=1)
|
||||
assert not await coord.check()
|
||||
|
||||
time.advance(hours=2)
|
||||
assert coord.nextSyncAttempt() == time.now() - timedelta(hours=1)
|
||||
assert await coord.check()
|
||||
|
||||
|
||||
def test_save_creds(coord: Coordinator, source, dest):
|
||||
pass
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_check_size_new_backup(coord: Coordinator, source: HelperTestSource, dest: HelperTestSource, time, fs: FsFaker):
|
||||
skipForWindows()
|
||||
fs.setFreeBytes(0)
|
||||
with raises(LowSpaceError):
|
||||
await coord.startBackup(CreateOptions(time.now(), "Test Name"))
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_check_size_sync(coord: Coordinator, source: HelperTestSource, dest: HelperTestSource, time, fs: FsFaker, global_info: GlobalInfo):
|
||||
skipForWindows()
|
||||
fs.setFreeBytes(0)
|
||||
await coord.sync()
|
||||
assert len(coord.backups()) == 0
|
||||
assert global_info._last_error is not None
|
||||
|
||||
await coord.sync()
|
||||
assert len(coord.backups()) == 0
|
||||
assert global_info._last_error is not None
|
||||
|
||||
# Verify it resets the global size skip check, but gets through once
|
||||
global_info.setSkipSpaceCheckOnce(True)
|
||||
await coord.sync()
|
||||
assert len(coord.backups()) == 1
|
||||
assert global_info._last_error is None
|
||||
assert not global_info.isSkipSpaceCheckOnce()
|
||||
|
||||
# The next attempt to backup should fail again.
|
||||
time.advance(days=7)
|
||||
await coord.sync()
|
||||
assert len(coord.backups()) == 1
|
||||
assert global_info._last_error is not None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_cancel(coord: Coordinator, global_info: GlobalInfo):
|
||||
coord._sync_wait.clear()
|
||||
asyncio.create_task(coord.sync())
|
||||
await coord._sync_start.wait()
|
||||
await coord.cancel()
|
||||
assert isinstance(global_info._last_error, UserCancelledError)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_working_through_upload(coord: Coordinator, global_info: GlobalInfo, dest):
|
||||
coord._sync_wait.clear()
|
||||
assert not coord.isWorkingThroughUpload()
|
||||
sync_task = asyncio.create_task(coord.sync())
|
||||
await coord._sync_start.wait()
|
||||
assert not coord.isWorkingThroughUpload()
|
||||
dest.working = True
|
||||
assert coord.isWorkingThroughUpload()
|
||||
coord._sync_wait.set()
|
||||
await asyncio.wait([sync_task])
|
||||
assert not coord.isWorkingThroughUpload()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_alternate_timezone(coord: Coordinator, time: FakeTime, model: Model, dest, source, simple_config: Config):
|
||||
time.setTimeZone("Europe/Stockholm")
|
||||
simple_config.override(Setting.BACKUP_TIME_OF_DAY, "12:00")
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
|
||||
source.setMax(10)
|
||||
source.insert("Fri", time.toUtc(time.local(2020, 3, 16, 18, 5)))
|
||||
time.setNow(time.local(2020, 3, 16, 18, 6))
|
||||
model.reinitialize()
|
||||
coord.reset()
|
||||
await coord.sync()
|
||||
assert not await coord.check()
|
||||
assert coord.nextBackupTime() == time.local(2020, 3, 17, 12)
|
||||
|
||||
time.setNow(time.local(2020, 3, 17, 11, 59))
|
||||
await coord.sync()
|
||||
assert not await coord.check()
|
||||
time.setNow(time.local(2020, 3, 17, 12))
|
||||
assert await coord.check()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_disabled_at_install(coord: Coordinator, dest, time):
|
||||
"""
|
||||
Verifies that at install time, if some backups are already present the
|
||||
addon doesn't try to sync over and over when drive is disabled. This was
|
||||
a problem at one point.
|
||||
"""
|
||||
dest.setEnabled(True)
|
||||
await coord.sync()
|
||||
assert len(coord.backups()) == 1
|
||||
|
||||
dest.setEnabled(False)
|
||||
time.advance(days=5)
|
||||
assert await coord.check()
|
||||
await coord.sync()
|
||||
assert not await coord.check()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_only_source_configured(coord: Coordinator, dest: HelperTestSource, time, source: HelperTestSource):
|
||||
source.setEnabled(True)
|
||||
dest.setEnabled(False)
|
||||
dest.setNeedsConfiguration(False)
|
||||
await coord.sync()
|
||||
assert len(coord.backups()) == 1
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_schedule_backup_next_sync_attempt(coord: Coordinator, model, source: HelperTestSource, dest: HelperTestSource, backup, time: FakeTime, simple_config: Config):
|
||||
"""
|
||||
Next backup is before max sync interval is reached
|
||||
"""
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
simple_config.override(Setting.MAX_SYNC_INTERVAL_SECONDS, 60 * 60)
|
||||
simple_config.override(Setting.DEFAULT_SYNC_INTERVAL_VARIATION, 0)
|
||||
|
||||
time.setTimeZone("Europe/Stockholm")
|
||||
simple_config.override(Setting.BACKUP_TIME_OF_DAY, "03:23")
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
|
||||
source.setMax(10)
|
||||
source.insert("Fri", time.toUtc(time.local(2020, 3, 16, 3, 33)))
|
||||
|
||||
time.setNow(time.local(2020, 3, 17, 2, 29))
|
||||
model.reinitialize()
|
||||
coord.reset()
|
||||
await coord.sync()
|
||||
assert coord.nextBackupTime() == time.local(2020, 3, 17, 3, 23)
|
||||
assert coord.nextBackupTime() == coord.nextSyncAttempt()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_max_sync_interval_next_sync_attempt(coord: Coordinator, model, source: HelperTestSource, dest: HelperTestSource, backup, time: FakeTime, simple_config: Config):
|
||||
"""
|
||||
Next backup is after max sync interval is reached
|
||||
"""
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
simple_config.override(Setting.MAX_SYNC_INTERVAL_SECONDS, 60 * 60)
|
||||
simple_config.override(Setting.DEFAULT_SYNC_INTERVAL_VARIATION, 0)
|
||||
|
||||
time.setTimeZone("Europe/Stockholm")
|
||||
simple_config.override(Setting.BACKUP_TIME_OF_DAY, "03:23")
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
|
||||
source.setMax(10)
|
||||
source.insert("Fri", time.toUtc(time.local(2020, 3, 16, 3, 33)))
|
||||
time.setNow(time.local(2020, 3, 17, 1, 29))
|
||||
model.reinitialize()
|
||||
coord.reset()
|
||||
await coord.sync()
|
||||
assert coord.nextSyncAttempt() == time.local(2020, 3, 17, 2, 29)
|
||||
assert coord.nextBackupTime() > coord.nextSyncAttempt()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_generational_only_ignored_snapshots(coord: Coordinator, model, source: HelperTestSource, dest: HelperTestSource, time: FakeTime, simple_config: Config, global_info: GlobalInfo):
|
||||
"""
|
||||
Verifies a sync with generational settings and only ignored snapshots doesn't cause an error.
|
||||
Setup is taken from https://github.com/sabeechen/hassio-google-drive-backup/issues/727
|
||||
"""
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
simple_config.override(Setting.GENERATIONAL_DAYS, 3)
|
||||
simple_config.override(Setting.GENERATIONAL_WEEKS, 4)
|
||||
simple_config.override(Setting.GENERATIONAL_DELETE_EARLY, True)
|
||||
simple_config.override(Setting.MAX_BACKUPS_IN_HA, 2)
|
||||
simple_config.override(Setting.MAX_BACKUPS_IN_GOOGLE_DRIVE, 6)
|
||||
|
||||
backup = source.insert("Fri", time.toUtc(time.local(2020, 3, 16, 3, 33)))
|
||||
backup.setIgnore(True)
|
||||
time.setNow(time.local(2020, 3, 16, 4, 0))
|
||||
dest.setEnabled(False)
|
||||
source.setEnabled(True)
|
||||
|
||||
await coord.sync()
|
||||
assert global_info._last_error is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_max_sync_interval_randomness(coord: Coordinator, model, source: HelperTestSource, dest: HelperTestSource, backup, time: FakeTime, simple_config: Config):
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
simple_config.override(Setting.MAX_SYNC_INTERVAL_SECONDS, 60 * 60)
|
||||
simple_config.override(Setting.DEFAULT_SYNC_INTERVAL_VARIATION, 0.5)
|
||||
|
||||
time.setTimeZone("Europe/Stockholm")
|
||||
simple_config.override(Setting.BACKUP_TIME_OF_DAY, "03:23")
|
||||
simple_config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
|
||||
source.setMax(10)
|
||||
source.insert("Fri", time.toUtc(time.local(2020, 3, 16, 3, 33)))
|
||||
time.setNow(time.local(2020, 3, 17, 1, 29))
|
||||
model.reinitialize()
|
||||
coord.reset()
|
||||
await coord.sync()
|
||||
next_attempt = coord.nextSyncAttempt()
|
||||
|
||||
# verify it's within the expected range
|
||||
assert next_attempt - time.now() <= timedelta(hours=1)
|
||||
assert next_attempt - time.now() >= timedelta(hours=0.5)
|
||||
|
||||
# verify it doesn't change
|
||||
assert coord.nextSyncAttempt() == next_attempt
|
||||
time.advance(minutes=1)
|
||||
assert coord.nextSyncAttempt() == next_attempt
|
||||
|
||||
# sync, and verify it does change
|
||||
await coord.sync()
|
||||
assert coord.nextSyncAttempt() != next_attempt
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_precaching(coord: Coordinator, precache: DestinationPrecache, dest: HelperTestSource, time: FakeTime, global_info: GlobalInfo):
|
||||
await coord.sync()
|
||||
dest.reset()
|
||||
|
||||
# Warm the cache
|
||||
assert precache.getNextWarmDate() < coord.nextSyncAttempt()
|
||||
assert precache.cached(dest.name(), time.now()) is None
|
||||
assert dest.query_count == 0
|
||||
time.setNow(precache.getNextWarmDate())
|
||||
await precache.checkForSmoothing()
|
||||
assert precache.cached(dest.name(), time.now()) is not None
|
||||
assert dest.query_count == 1
|
||||
|
||||
# No queries should have been made to dest, and the cache should now be cleared
|
||||
time.setNow(coord.nextSyncAttempt())
|
||||
assert precache.cached(dest.name(), time.now()) is not None
|
||||
await coord.sync()
|
||||
assert dest.query_count == 1
|
||||
assert precache.cached(dest.name(), time.now()) is None
|
||||
assert global_info._last_error is None
|
||||
210
hassio-google-drive-backup/tests/test_data_cache.py
Normal file
@@ -0,0 +1,210 @@
|
||||
import pytest
|
||||
import os
|
||||
import json
|
||||
from injector import Injector
|
||||
from datetime import timedelta
|
||||
from backup.config import Config, Setting, VERSION, Version
|
||||
from backup.util import DataCache, UpgradeFlags, KEY_CREATED, KEY_LAST_SEEN, CACHE_EXPIRATION_DAYS
|
||||
from backup.time import Time
|
||||
from os.path import join
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_read_and_write(config: Config, time: Time) -> None:
|
||||
cache = DataCache(config, time)
|
||||
assert len(cache.backups) == 0
|
||||
|
||||
cache.backup("test")[KEY_CREATED] = time.now().isoformat()
|
||||
assert not cache._dirty
|
||||
cache.makeDirty()
|
||||
assert cache._dirty
|
||||
cache.saveIfDirty()
|
||||
assert not cache._dirty
|
||||
|
||||
cache = DataCache(config, time)
|
||||
assert cache.backup("test")[KEY_CREATED] == time.now().isoformat()
|
||||
assert not cache._dirty
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_backup_expiration(config: Config, time: Time) -> None:
|
||||
cache = DataCache(config, time)
|
||||
assert len(cache.backups) == 0
|
||||
|
||||
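# Entries not seen for more than CACHE_EXPIRATION_DAYS should be dropped when the cache is saved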
cache.backup("new")[KEY_LAST_SEEN] = time.now().isoformat()
|
||||
cache.backup("old")[KEY_LAST_SEEN] = (
|
||||
time.now() - timedelta(days=CACHE_EXPIRATION_DAYS + 1)) .isoformat()
|
||||
cache.makeDirty()
|
||||
cache.saveIfDirty()
|
||||
|
||||
assert len(cache.backups) == 1
|
||||
assert "new" in cache.backups
|
||||
assert "old" not in cache.backups
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_version_upgrades(time: Time, injector: Injector, config: Config) -> None:
|
||||
# Simulate upgrading from an untracked version
|
||||
assert not os.path.exists(config.get(Setting.DATA_CACHE_FILE_PATH))
|
||||
cache = injector.get(DataCache)
|
||||
upgrade_time = time.now()
|
||||
assert cache.previousVersion == Version.default()
|
||||
assert cache.currentVersion == Version.parse(VERSION)
|
||||
|
||||
assert os.path.exists(config.get(Setting.DATA_CACHE_FILE_PATH))
|
||||
with open(config.get(Setting.DATA_CACHE_FILE_PATH)) as f:
|
||||
data = json.load(f)
|
||||
assert data["upgrades"] == [{
|
||||
"prev_version": str(Version.default()),
|
||||
"new_version": VERSION,
|
||||
"date": upgrade_time.isoformat()
|
||||
}]
|
||||
|
||||
# Reload the data cache, verify there is no upgrade.
|
||||
time.advance(days=1)
|
||||
cache = DataCache(config, time)
|
||||
assert cache.previousVersion == Version.parse(VERSION)
|
||||
assert cache.currentVersion == Version.parse(VERSION)
|
||||
assert os.path.exists(config.get(Setting.DATA_CACHE_FILE_PATH))
|
||||
|
||||
with open(config.get(Setting.DATA_CACHE_FILE_PATH)) as f:
|
||||
data = json.load(f)
|
||||
assert data["upgrades"] == [{
|
||||
"prev_version": str(Version.default()),
|
||||
"new_version": VERSION,
|
||||
"date": upgrade_time.isoformat()
|
||||
}]
|
||||
|
||||
# simulate upgrading to a new version, verify an upgrade gets identified.
|
||||
upgrade_version = Version.parse("200")
|
||||
|
||||
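# Subclass that reports a newer current version, simulating an upgraded addon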
class UpgradeCache(DataCache):
|
||||
def __init__(self):
|
||||
super().__init__(config, time)
|
||||
|
||||
@property
|
||||
def currentVersion(self):
|
||||
return upgrade_version
|
||||
|
||||
cache = UpgradeCache()
|
||||
assert cache.previousVersion == Version.parse(VERSION)
|
||||
assert cache.currentVersion == upgrade_version
|
||||
assert os.path.exists(config.get(Setting.DATA_CACHE_FILE_PATH))
|
||||
|
||||
with open(config.get(Setting.DATA_CACHE_FILE_PATH)) as f:
|
||||
data = json.load(f)
|
||||
assert data["upgrades"] == [
|
||||
{
|
||||
"prev_version": str(Version.default()),
|
||||
"new_version": VERSION,
|
||||
"date": upgrade_time.isoformat()
|
||||
},
|
||||
{
|
||||
"prev_version": VERSION,
|
||||
"new_version": str(upgrade_version),
|
||||
"date": time.now().isoformat()
|
||||
}
|
||||
]
|
||||
|
||||
next_upgrade_time = time.now()
|
||||
time.advance(days=1)
|
||||
# Verify version upgrade time queries work as expected
|
||||
assert cache.getUpgradeTime(Version.parse(VERSION)) == upgrade_time
|
||||
assert cache.getUpgradeTime(Version.default()) == upgrade_time
|
||||
assert cache.getUpgradeTime(upgrade_version) == next_upgrade_time
|
||||
|
||||
# degenerate case, should never happen but a sensible value needs to be returned
|
||||
assert cache.getUpgradeTime(Version.parse("201")) == time.now()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_flag(config: Config, time: Time):
|
||||
cache = DataCache(config, time)
|
||||
assert not cache.checkFlag(UpgradeFlags.TESTING_FLAG)
|
||||
assert not cache.dirty
|
||||
|
||||
cache.addFlag(UpgradeFlags.TESTING_FLAG)
|
||||
assert cache.dirty
|
||||
assert cache.checkFlag(UpgradeFlags.TESTING_FLAG)
|
||||
cache.saveIfDirty()
|
||||
|
||||
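# Flags should persist across a reload from disk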
cache = DataCache(config, time)
|
||||
assert cache.checkFlag(UpgradeFlags.TESTING_FLAG)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_warn_upgrade_new_install(config: Config, time: Time):
|
||||
"""A fresh install of the addon should never warn about upgrade snapshots"""
|
||||
cache = DataCache(config, time)
|
||||
assert not cache.notifyForIgnoreUpgrades
|
||||
assert cache._config.get(Setting.IGNORE_UPGRADE_BACKUPS)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_warn_upgrade_old_install(config: Config, time: Time):
|
||||
"""An old install of the addon warn about upgrade snapshots"""
|
||||
with open(config.get(Setting.DATA_CACHE_FILE_PATH), "w") as f:
|
||||
data = {
|
||||
"upgrades": [
|
||||
{
|
||||
"prev_version": str(Version.default()),
|
||||
"new_version": "0.108.1",
|
||||
"date": time.now().isoformat()
|
||||
}
|
||||
]
|
||||
}
|
||||
json.dump(data, f)
|
||||
cache = DataCache(config, time)
|
||||
assert cache.notifyForIgnoreUpgrades
|
||||
assert not cache._config.get(Setting.IGNORE_UPGRADE_BACKUPS)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_warn_upgrade_old_install_explicit_ignore_upgrades(config: Config, time: Time, cleandir: str):
|
||||
"""An old install of the addon should not warn about upgrade snapshots if it explicitly ignores them"""
|
||||
with open(config.get(Setting.DATA_CACHE_FILE_PATH), "w") as f:
|
||||
data = {
|
||||
"upgrades": [
|
||||
{
|
||||
"prev_version": str(Version.default()),
|
||||
"new_version": "0.108.1",
|
||||
"date": time.now().isoformat()
|
||||
}
|
||||
]
|
||||
}
|
||||
json.dump(data, f)
|
||||
config_path = join(cleandir, "config.json")
|
||||
with open(config_path, "w") as f:
|
||||
data = {
|
||||
Setting.IGNORE_UPGRADE_BACKUPS.value: True,
|
||||
Setting.DATA_CACHE_FILE_PATH.value: config.get(Setting.DATA_CACHE_FILE_PATH)
|
||||
}
|
||||
json.dump(data, f)
|
||||
cache = DataCache(Config.fromFile(config_path), time)
|
||||
assert not cache.notifyForIgnoreUpgrades
|
||||
assert cache._config.get(Setting.IGNORE_UPGRADE_BACKUPS)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_warn_upgrade_old_install_explicit_ignore_others(config: Config, time: Time, cleandir: str):
|
||||
"""An old install of the addon should not warn about upgrade snapshots if it explicitly ignores them"""
|
||||
with open(config.get(Setting.DATA_CACHE_FILE_PATH), "w") as f:
|
||||
data = {
|
||||
"upgrades": [
|
||||
{
|
||||
"prev_version": str(Version.default()),
|
||||
"new_version": "0.108.1",
|
||||
"date": time.now().isoformat()
|
||||
}
|
||||
]
|
||||
}
|
||||
json.dump(data, f)
|
||||
config_path = join(cleandir, "config.json")
|
||||
with open(config_path, "w") as f:
|
||||
data = {
|
||||
Setting.IGNORE_OTHER_BACKUPS.value: True,
|
||||
Setting.DATA_CACHE_FILE_PATH.value: config.get(Setting.DATA_CACHE_FILE_PATH)
|
||||
}
|
||||
json.dump(data, f)
|
||||
cache = DataCache(Config.fromFile(config_path), time)
|
||||
assert not cache.notifyForIgnoreUpgrades
|
||||
142
hassio-google-drive-backup/tests/test_debugworker.py
Normal file
@@ -0,0 +1,142 @@
|
||||
import pytest
|
||||
|
||||
from backup.config import Config, Setting
|
||||
from backup.debugworker import DebugWorker
|
||||
from backup.util import GlobalInfo
|
||||
from backup.logger import getLogger
|
||||
from dev.simulationserver import SimulationServer
|
||||
from .helpers import skipForWindows
|
||||
from backup.server import ErrorStore
|
||||
from .conftest import FakeTime
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_dns_info(debug_worker: DebugWorker, config: Config):
|
||||
skipForWindows()
|
||||
config.override(Setting.SEND_ERROR_REPORTS, True)
|
||||
config.override(Setting.DRIVE_HOST_NAME, "localhost")
|
||||
await debug_worker.doWork()
|
||||
assert '127.0.0.1' in debug_worker.dns_info['localhost']
|
||||
assert 'localhost' in debug_worker.dns_info['localhost']
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_bad_host(debug_worker: DebugWorker, config: Config):
|
||||
skipForWindows()
|
||||
config.override(Setting.DRIVE_HOST_NAME, "dasdfdfgvxcvvsoejbr.com")
|
||||
await debug_worker.doWork()
|
||||
assert "Name or service not known" in debug_worker.dns_info['dasdfdfgvxcvvsoejbr.com']['dasdfdfgvxcvvsoejbr.com']
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_send_error_report(time, debug_worker: DebugWorker, config: Config, global_info: GlobalInfo, server, error_store: ErrorStore):
|
||||
config.override(Setting.SEND_ERROR_REPORTS, True)
|
||||
config.override(Setting.DRIVE_HOST_NAME, "localhost")
|
||||
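# Record two successful syncs and one failure so the report has data to summarize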
global_info.sync()
|
||||
global_info.success()
|
||||
global_info.sync()
|
||||
global_info.success()
|
||||
global_info.sync()
|
||||
global_info.failed(Exception())
|
||||
await debug_worker.doWork()
|
||||
report = error_store.last_error
|
||||
assert report['report']['sync_success_count'] == 2
|
||||
assert report['report']['sync_count'] == 3
|
||||
assert report['report']['failure_count'] == 1
|
||||
assert report['report']['sync_last_start'] == time.now().isoformat()
|
||||
assert report['report']['failure_time'] == time.now().isoformat()
|
||||
assert report['report']['error'] == getLogger("test").formatException(Exception())
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_dont_send_error_report(time, debug_worker: DebugWorker, config: Config, global_info: GlobalInfo, server: SimulationServer, error_store: ErrorStore):
|
||||
config.override(Setting.SEND_ERROR_REPORTS, False)
|
||||
config.override(Setting.DRIVE_HOST_NAME, "localhost")
|
||||
global_info.failed(Exception())
|
||||
await debug_worker.doWork()
|
||||
assert error_store.last_error is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_only_send_duplicates(time, debug_worker: DebugWorker, config: Config, global_info: GlobalInfo, server, error_store: ErrorStore):
|
||||
config.override(Setting.SEND_ERROR_REPORTS, True)
|
||||
config.override(Setting.DRIVE_HOST_NAME, "localhost")
|
||||
global_info.failed(Exception("boom1"))
|
||||
firstExceptionTime = time.now()
|
||||
await debug_worker.doWork()
|
||||
report = error_store.last_error
|
||||
assert report['report']["error"] == getLogger("test").formatException(Exception("boom1"))
|
||||
assert report['report']["time"] == firstExceptionTime.isoformat()
|
||||
|
||||
# Same exception shouldn't cause us to send the error report again
|
||||
time.advance(days=1)
|
||||
global_info.failed(Exception("boom1"))
|
||||
await debug_worker.doWork()
|
||||
report = error_store.last_error
|
||||
assert report['report']["error"] == getLogger("test").formatException(Exception("boom1"))
|
||||
assert report['report']["time"] == firstExceptionTime.isoformat()
|
||||
|
||||
# But a new exception will send a new report
|
||||
global_info.failed(Exception("boom2"))
|
||||
await debug_worker.doWork()
|
||||
report = error_store.last_error
|
||||
assert report['report']["error"] == getLogger("test").formatException(Exception("boom2"))
|
||||
assert report['report']["time"] == time.now().isoformat()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_send_clear(time, debug_worker: DebugWorker, config: Config, global_info: GlobalInfo, server, error_store: ErrorStore):
|
||||
config.override(Setting.SEND_ERROR_REPORTS, True)
|
||||
config.override(Setting.DRIVE_HOST_NAME, "localhost")
|
||||
|
||||
# Simulate failure
|
||||
global_info.failed(Exception("boom"))
|
||||
await debug_worker.doWork()
|
||||
|
||||
# And then success
|
||||
global_info.success()
|
||||
time.advance(days=1)
|
||||
await debug_worker.doWork()
|
||||
report = error_store.last_error
|
||||
assert report['report'] == {
|
||||
'duration': '1 day, 0:00:00'
|
||||
}
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_health_check_timing_success(server_url, time: FakeTime, debug_worker: DebugWorker, config: Config, server: SimulationServer):
|
||||
# Successful health checks should only happen once a day
|
||||
await debug_worker.doWork()
|
||||
assert server.interceptor.urlWasCalled("/health")
|
||||
server.interceptor.clear()
|
||||
|
||||
await debug_worker.doWork()
|
||||
assert not server.interceptor.urlWasCalled("/health")
|
||||
|
||||
time.advance(hours=23)
|
||||
await debug_worker.doWork()
|
||||
assert not server.interceptor.urlWasCalled("/health")
|
||||
|
||||
time.advance(hours=2)
|
||||
await debug_worker.doWork()
|
||||
assert server.interceptor.urlWasCalled("/health")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_health_check_timing_failure(server_url, time: FakeTime, debug_worker: DebugWorker, config: Config, server: SimulationServer):
|
||||
# Failed health checks retry after a minute
|
||||
server.interceptor.setError("/health", 500)
|
||||
|
||||
await debug_worker.doWork()
|
||||
assert server.interceptor.urlWasCalled("/health")
|
||||
server.interceptor.clear()
|
||||
|
||||
await debug_worker.doWork()
|
||||
assert not server.interceptor.urlWasCalled("/health")
|
||||
|
||||
time.advance(seconds=59)
|
||||
await debug_worker.doWork()
|
||||
assert not server.interceptor.urlWasCalled("/health")
|
||||
|
||||
time.advance(seconds=2)
|
||||
await debug_worker.doWork()
|
||||
assert server.interceptor.urlWasCalled("/health")
|
||||
119
hassio-google-drive-backup/tests/test_destinationprecache.py
Normal file
@@ -0,0 +1,119 @@
|
||||
|
||||
|
||||
from backup.model import DestinationPrecache, Model, Coordinator
|
||||
from backup.config import Config, Setting
|
||||
from tests.faketime import FakeTime
|
||||
from dev.request_interceptor import RequestInterceptor
|
||||
from dev.simulated_google import URL_MATCH_DRIVE_API
|
||||
from backup.drive import DriveSource
|
||||
from datetime import timedelta
|
||||
import pytest
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_no_caching_before_cache_time(server, precache: DestinationPrecache, model: Model, drive: DriveSource, interceptor: RequestInterceptor, coord: Coordinator, time: FakeTime) -> None:
|
||||
await coord.sync()
|
||||
|
||||
interceptor.clear()
|
||||
await precache.checkForSmoothing()
|
||||
assert precache.getNextWarmDate() > time.now()
|
||||
assert not interceptor.urlWasCalled(URL_MATCH_DRIVE_API)
|
||||
assert precache.cached(drive.name(), time.now()) is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_no_caching_after_sync_time(server, precache: DestinationPrecache, model: Model, drive: DriveSource, interceptor: RequestInterceptor, coord: Coordinator, time: FakeTime) -> None:
|
||||
await coord.sync()
|
||||
|
||||
time.setNow(coord.nextSyncAttempt())
|
||||
interceptor.clear()
|
||||
await precache.checkForSmoothing()
|
||||
assert precache.getNextWarmDate() < time.now()
|
||||
assert not interceptor.urlWasCalled(URL_MATCH_DRIVE_API)
|
||||
assert precache.cached(drive.name(), time.now()) is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_cache_after_warm_date(server, precache: DestinationPrecache, model: Model, drive: DriveSource, interceptor: RequestInterceptor, coord: Coordinator, time: FakeTime) -> None:
|
||||
await coord.sync()
|
||||
interceptor.clear()
|
||||
assert precache.getNextWarmDate() < coord.nextSyncAttempt()
|
||||
|
||||
time.setNow(precache.getNextWarmDate())
|
||||
await precache.checkForSmoothing()
|
||||
assert interceptor.urlWasCalled(URL_MATCH_DRIVE_API)
|
||||
assert precache.cached(drive.name(), time.now()) is not None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
async def test_no_double_caching(server, precache: DestinationPrecache, model: Model, drive: DriveSource, interceptor: RequestInterceptor, coord: Coordinator, time: FakeTime) -> None:
|
||||
await coord.sync()
|
||||
interceptor.clear()
|
||||
|
||||
time.setNow(precache.getNextWarmDate())
|
||||
await precache.checkForSmoothing()
|
||||
assert precache.cached(drive.name(), time.now()) is not None
|
||||
|
||||
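# Checking again before the next sync window shouldn't query Drive a second time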
interceptor.clear()
|
||||
time.setNow(precache.getNextWarmDate() + (coord.nextSyncAttempt() - precache.getNextWarmDate()) / 2)
|
||||
await precache.checkForSmoothing()
|
||||
assert not interceptor.urlWasCalled(URL_MATCH_DRIVE_API)
|
||||
assert precache.cached(drive.name(), time.now()) is not None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
async def test_cache_expiration(server, precache: DestinationPrecache, model: Model, drive: DriveSource, interceptor: RequestInterceptor, coord: Coordinator, time: FakeTime) -> None:
|
||||
await coord.sync()
|
||||
|
||||
time.setNow(precache.getNextWarmDate())
|
||||
await precache.checkForSmoothing()
|
||||
assert precache.cached(drive.name(), time.now()) is not None
|
||||
|
||||
time.setNow(coord.nextSyncAttempt() + timedelta(minutes=2))
|
||||
assert precache.cached(drive.name(), time.now()) is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
async def test_cache_clear(server, precache: DestinationPrecache, model: Model, drive: DriveSource, interceptor: RequestInterceptor, coord: Coordinator, time: FakeTime) -> None:
|
||||
await coord.sync()
|
||||
|
||||
time.setNow(precache.getNextWarmDate())
|
||||
await precache.checkForSmoothing()
|
||||
assert precache.cached(drive.name(), time.now()) is not None
|
||||
|
||||
precache.clear()
|
||||
assert precache.cached(drive.name(), time.now()) is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
async def test_cache_error_backoff(server, precache: DestinationPrecache, model: Model, drive: DriveSource, interceptor: RequestInterceptor, coord: Coordinator, time: FakeTime) -> None:
|
||||
await coord.sync()
|
||||
|
||||
time.setNow(precache.getNextWarmDate())
|
||||
interceptor.setError(URL_MATCH_DRIVE_API, status=503)
|
||||
await precache.checkForSmoothing()
|
||||
|
||||
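# A failed warmup leaves nothing cached and pushes the next warm date out by at least a day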
assert precache.cached(drive.name(), time.now()) is None
|
||||
delta = precache.getNextWarmDate() - time.now()
|
||||
assert delta >= timedelta(days=1)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
async def test_cache_warm_date_stability(server, precache: DestinationPrecache, model: Model, drive: DriveSource, interceptor: RequestInterceptor, coord: Coordinator, time: FakeTime) -> None:
|
||||
await coord.sync()
|
||||
|
||||
# The warm date shouldn't change
|
||||
last_warm = precache.getNextWarmDate()
|
||||
assert precache.getNextWarmDate() == last_warm
|
||||
time.setNow(last_warm - timedelta(minutes=1))
|
||||
assert precache.getNextWarmDate() == last_warm
|
||||
|
||||
# Until the cache is warmed
|
||||
time.setNow(last_warm)
|
||||
await precache.checkForSmoothing()
|
||||
assert precache.cached(drive.name(), time.now()) is not None
|
||||
assert precache.getNextWarmDate() != last_warm
|
||||
|
||||
|
||||
@pytest.mark.asyncio
async def test_disable_caching(server, precache: DestinationPrecache, model: Model, drive: DriveSource, interceptor: RequestInterceptor, coord: Coordinator, time: FakeTime, config: Config) -> None:
|
||||
await coord.sync()
|
||||
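# A warmup window of zero disables precaching entirely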
config.override(Setting.CACHE_WARMUP_MAX_SECONDS, 0)
|
||||
|
||||
time.setNow(precache.getNextWarmDate())
|
||||
await precache.checkForSmoothing()
|
||||
assert precache.cached(drive.name(), time.now()) is None
|
||||
1048
hassio-google-drive-backup/tests/test_drivesource.py
Normal file
File diff suppressed because it is too large
89
hassio-google-drive-backup/tests/test_duration_parser.py
Normal file
@@ -0,0 +1,89 @@
|
||||
from backup.config.durationparser import DurationParser
|
||||
from datetime import timedelta
|
||||
|
||||
|
||||
def test_parse_days():
|
||||
parser = DurationParser()
|
||||
assert parser.parse("1 days") == timedelta(days=1)
|
||||
assert parser.parse("5 days") == timedelta(days=5)
|
||||
assert parser.parse("5 d") == timedelta(days=5)
|
||||
assert parser.parse("5d") == timedelta(days=5)
|
||||
assert parser.parse("5.0d") == timedelta(days=5)
|
||||
assert parser.parse("5.0day") == timedelta(days=5)
|
||||
assert parser.parse("5.0 day") == timedelta(days=5)
|
||||
assert parser.parse("5.5 days") == timedelta(days=5, hours=12)
|
||||
|
||||
|
||||
def test_parse_hours():
|
||||
parser = DurationParser()
|
||||
assert parser.parse("1 hours") == timedelta(hours=1)
|
||||
assert parser.parse("5 hours") == timedelta(hours=5)
|
||||
assert parser.parse("5 h") == timedelta(hours=5)
|
||||
assert parser.parse("5hour") == timedelta(hours=5)
|
||||
assert parser.parse("5.0h") == timedelta(hours=5)
|
||||
assert parser.parse("5.0 hour") == timedelta(hours=5)
|
||||
assert parser.parse("5.5 h") == timedelta(hours=5, minutes=30)
|
||||
|
||||
|
||||
def test_parse_minutes():
|
||||
parser = DurationParser()
|
||||
assert parser.parse("1 minutes") == timedelta(minutes=1)
|
||||
assert parser.parse("5 min") == timedelta(minutes=5)
|
||||
assert parser.parse("5 m") == timedelta(minutes=5)
|
||||
assert parser.parse("5mins") == timedelta(minutes=5)
|
||||
assert parser.parse("5.0m") == timedelta(minutes=5)
|
||||
assert parser.parse("5.0 min") == timedelta(minutes=5)
|
||||
assert parser.parse("5.5 m") == timedelta(minutes=5, seconds=30)
|
||||
|
||||
|
||||
def test_parse_seconds():
|
||||
parser = DurationParser()
|
||||
assert parser.parse("1 seconds") == timedelta(seconds=1)
|
||||
assert parser.parse("5 sec") == timedelta(seconds=5)
|
||||
assert parser.parse("5 s") == timedelta(seconds=5)
|
||||
assert parser.parse("5secs") == timedelta(seconds=5)
|
||||
assert parser.parse("5.0s") == timedelta(seconds=5)
|
||||
assert parser.parse("5.0 secs") == timedelta(seconds=5)
|
||||
assert parser.parse("5.5 s") == timedelta(seconds=5, milliseconds=500)
|
||||
|
||||
|
||||
def test_parse_multiple():
|
||||
parser = DurationParser()
|
||||
assert parser.parse("1 day, 5 hours, 30 seconds") == timedelta(days=1, hours=5, seconds=30)
|
||||
assert parser.parse("1 day 5 hours 30 seconds") == timedelta(days=1, hours=5, seconds=30)
|
||||
assert parser.parse("1d 5 hours 30s") == timedelta(days=1, hours=5, seconds=30)
|
||||
assert parser.parse("1d 5h 30s") == timedelta(days=1, hours=5, seconds=30)
|
||||
assert parser.parse("5m 1d 5h 30s") == timedelta(days=1, hours=5, minutes=5, seconds=30)
|
||||
|
||||
|
||||
def test_format():
|
||||
parser = DurationParser()
|
||||
assert parser.format(timedelta(days=1)) == "1 days"
|
||||
assert parser.format(timedelta(seconds=86400)) == "1 days"
|
||||
assert parser.format(timedelta(hours=1)) == "1 hours"
|
||||
assert parser.format(timedelta(minutes=1)) == "1 minutes"
|
||||
assert parser.format(timedelta(seconds=60)) == "1 minutes"
|
||||
assert parser.format(timedelta(seconds=5)) == "5 seconds"
|
||||
assert parser.format(timedelta(seconds=1)) == "1 seconds"
|
||||
assert parser.format(timedelta(days=5, hours=6, minutes=7)) == "5 days, 6 hours, 7 minutes"
|
||||
assert parser.format(timedelta(days=5, hours=6, minutes=7, seconds=8)) == "5 days, 6 hours, 7 minutes, 8 seconds"
|
||||
|
||||
|
||||
def test_back_and_forth():
|
||||
doTestConvert(timedelta(hours=5))
|
||||
doTestConvert(timedelta(minutes=600))
|
||||
doTestConvert(timedelta(days=30))
|
||||
doTestConvert(timedelta(days=5, minutes=6, hours=10, seconds=20))
|
||||
|
||||
|
||||
def doTestConvert(duration):
|
||||
parser = DurationParser()
|
||||
assert parser.parse(parser.format(duration)) == duration
|
||||
|
||||
|
||||
def test_convert_empty_seconds():
|
||||
parser = DurationParser()
|
||||
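# Bare numbers and the empty string are interpreted as seconds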
assert parser.parse("") == timedelta(seconds=0)
|
||||
assert parser.parse("0") == timedelta(seconds=0)
|
||||
assert parser.parse("30") == timedelta(seconds=30)
|
||||
assert parser.parse(str(60 * 60)) == timedelta(seconds=60 * 60)
|
||||
@@ -0,0 +1,28 @@
|
||||
from backup.config import DurationAsStringValidator
|
||||
from backup.exceptions import InvalidConfigurationValue
|
||||
from datetime import timedelta
|
||||
import pytest
|
||||
|
||||
|
||||
def test_minimum():
|
||||
parser = DurationAsStringValidator("test", minimum=10)
|
||||
assert parser.validate("11 seconds") == 11
|
||||
assert parser.validate(11) == 11
|
||||
with pytest.raises(InvalidConfigurationValue):
|
||||
parser.validate("9 seconds")
|
||||
|
||||
|
||||
def test_maximum():
|
||||
parser = DurationAsStringValidator("test", maximum=10)
|
||||
assert parser.validate("9 seconds") == 9
|
||||
assert parser.validate(9) == 9
|
||||
with pytest.raises(InvalidConfigurationValue):
|
||||
parser.validate("11 seconds")
|
||||
assert parser.formatForUi(9) == "9 seconds"
|
||||
|
||||
|
||||
def test_base():
|
||||
parser = DurationAsStringValidator("test", base_seconds=60)
|
||||
assert parser.validate("60 seconds") == 1
|
||||
assert parser.validate(60) == 60
|
||||
assert parser.formatForUi(1) == "1 minutes"
|
||||
13
hassio-google-drive-backup/tests/test_estimator.py
Normal file
@@ -0,0 +1,13 @@
|
||||
import pytest
|
||||
from backup.util import Estimator
|
||||
from backup.config import Config, Setting
|
||||
from backup.exceptions import LowSpaceError
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_check_space(estimator: Estimator, coord, config: Config):
|
||||
estimator.refresh()
|
||||
estimator.checkSpace(coord.backups())
|
||||
|
||||
config.override(Setting.LOW_SPACE_THRESHOLD, estimator.getBytesFree() + 1)
|
||||
with pytest.raises(LowSpaceError):
|
||||
estimator.checkSpace(coord.backups())
|
||||
49
hassio-google-drive-backup/tests/test_exceptions.py
Normal file
@@ -0,0 +1,49 @@
|
||||
from bs4 import BeautifulSoup
|
||||
import backup.exceptions
|
||||
import inspect
|
||||
import pytest
|
||||
from backup.exceptions import GoogleCredGenerateError, KnownError, KnownTransient, SimulatedError, GoogleDrivePermissionDenied, InvalidConfigurationValue, LogicError, ProtocolError, NoBackup, NotUploadable, PleaseWait, UploadFailed
|
||||
from .conftest import ReaderHelper
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_verify_coverage(ui_server, reader: ReaderHelper):
|
||||
# Get the list of exception codes
|
||||
ignore = [
|
||||
KnownError,
|
||||
KnownTransient,
|
||||
SimulatedError,
|
||||
GoogleDrivePermissionDenied,
|
||||
InvalidConfigurationValue,
|
||||
LogicError,
|
||||
NoBackup,
|
||||
NotUploadable,
|
||||
PleaseWait,
|
||||
ProtocolError,
|
||||
UploadFailed,
|
||||
GoogleCredGenerateError,
|
||||
]
|
||||
codes = {}
|
||||
for name, obj in inspect.getmembers(backup.exceptions):
|
||||
if inspect.isclass(obj) and (KnownError in obj.__bases__) and obj not in ignore:
|
||||
codes[obj().code()] = obj
|
||||
|
||||
# Get the list of ui dialogs
|
||||
document = await reader.get("", json=False)
|
||||
page = BeautifulSoup(document, 'html.parser')
|
||||
|
||||
dialogs = {}
|
||||
for div in page.find_all("div"):
|
||||
cls = div.get("class")
|
||||
if cls is None:
|
||||
continue
|
||||
if "error_card" in cls:
|
||||
for specific_class in cls:
|
||||
if specific_class in dialogs:
|
||||
dialogs[specific_class] = dialogs[specific_class] + 1
|
||||
else:
|
||||
dialogs[specific_class] = 1
|
||||
|
||||
# Make sure exactly one dialog has the class
|
||||
for code in codes.keys():
|
||||
assert dialogs[code] == 1
|
||||
186
hassio-google-drive-backup/tests/test_exchanger.py
Normal file
@@ -0,0 +1,186 @@
|
||||
import pytest
|
||||
|
||||
from dev.simulationserver import SimulationServer, RequestInterceptor
|
||||
from backup.time import Time
|
||||
from backup.config import Config, Setting
|
||||
from backup.drive import DriveRequests
|
||||
from backup.exceptions import CredRefreshMyError, GoogleCredentialsExpired, CredRefreshGoogleError
|
||||
from backup.tracing_session import TracingSession
|
||||
from yarl import URL
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_correct_host(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, server_url, interceptor: RequestInterceptor):
|
||||
# Verify the correct endpoints get called for a successful request
|
||||
session.record = True
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
assert interceptor.urlWasCalled("/drive/refresh")
|
||||
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_some_bad_hosts(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, server_url, interceptor: RequestInterceptor):
|
||||
session.record = True
|
||||
config.override(Setting.EXCHANGER_TIMEOUT_SECONDS, 1)
|
||||
config.override(Setting.TOKEN_SERVER_HOSTS, "https://this.goes.nowhere.info," + str(server_url))
|
||||
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
assert interceptor.urlWasCalled("/drive/refresh")
|
||||
|
||||
# Verify both hosts were checked
|
||||
assert session._records[0]['url'] == URL("https://this.goes.nowhere.info").with_path("/drive/refresh")
assert session._records[1]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_all_bad_hosts(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor):
|
||||
session.record = True
|
||||
config.override(Setting.EXCHANGER_TIMEOUT_SECONDS, 1)
|
||||
config.override(Setting.TOKEN_SERVER_HOSTS, "https://this.goes.nowhere.info,http://also.a.bad.host")
|
||||
|
||||
with pytest.raises(CredRefreshMyError) as e:
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
|
||||
# Error should be about the last host name
|
||||
assert e.value.reason.index("also.a.bad.host") >= 0
|
||||
|
||||
# Verify both hosts were checked
|
||||
assert session._records[0]['url'] == URL("https://this.goes.nowhere.info").with_path("/drive/refresh")
assert session._records[1]['url'] == URL("http://also.a.bad.host").with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_exchange_timeout(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
interceptor.setSleep("/drive/refresh", sleep=10)
|
||||
|
||||
config.override(Setting.EXCHANGER_TIMEOUT_SECONDS, 0.1)
|
||||
|
||||
with pytest.raises(CredRefreshMyError) as e:
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
|
||||
# Error should be about the last host name
|
||||
assert e.value.reason == "Timed out communicating with localhost"
|
||||
|
||||
# Verify the request went to the expected host
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_exchange_invalid_creds(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
drive_requests.creds._refresh_token = "fail"
|
||||
with pytest.raises(GoogleCredentialsExpired):
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
|
||||
# Verify the request went to the expected host
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_fail_503_with_error(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
interceptor.setError("^/drive/refresh$", 503, response={'error': 'test_value'})
|
||||
with pytest.raises(CredRefreshGoogleError) as e:
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
assert e.value.message() == "Couldn't refresh your credentials with Google because: 'test_value'"
|
||||
|
||||
# Verify the request went to the expected host
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_fail_503_invalid_grant(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
interceptor.setError("^/drive/refresh$", 503, response={'error': 'invalid_grant'})
|
||||
with pytest.raises(GoogleCredentialsExpired):
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
|
||||
# Verify the request went to the expected host
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_fail_503_with_invalid_json(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
interceptor.setError("^/drive/refresh$", 503, response={'ignored': 'nothing'})
|
||||
with pytest.raises(CredRefreshMyError) as e:
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
assert e.value.message() == "Couldn't refresh Google Drive credentials because: HTTP 503 from localhost"
|
||||
|
||||
# Verify the request went to the expected host
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_fail_503_with_no_data(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
interceptor.setError("^/drive/refresh$", 503)
|
||||
with pytest.raises(CredRefreshMyError) as e:
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
assert e.value.message() == "Couldn't refresh Google Drive credentials because: HTTP 503 from localhost"
|
||||
|
||||
# Verify the request went to the expected host
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_fail_401(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
interceptor.setError("^/drive/refresh$", 401)
|
||||
with pytest.raises(GoogleCredentialsExpired):
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
|
||||
# Verify the request went to the expected host
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_fail_401_no_fall_through(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
config.override(Setting.TOKEN_SERVER_HOSTS, str(server_url) + "," + str(server_url))
|
||||
interceptor.setError("^/drive/refresh$", 401)
|
||||
with pytest.raises(GoogleCredentialsExpired):
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
|
||||
# Verify only one host was tried
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
assert len(session._records) == 1
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_invalid_grant_no_fall_through(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
config.override(Setting.TOKEN_SERVER_HOSTS, str(server_url) + "," + str(server_url))
|
||||
interceptor.setError("^/drive/refresh$", 503, response={'error': 'invalid_grant'})
|
||||
with pytest.raises(GoogleCredentialsExpired):
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
|
||||
# Verify only one host was tried
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
|
||||
assert len(session._records) == 1
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_timeout_fall_through(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
config.override(Setting.EXCHANGER_TIMEOUT_SECONDS, 0.1)
|
||||
config.override(Setting.TOKEN_SERVER_HOSTS, str(server_url) + "," + str(server_url))
|
||||
interceptor.setSleep("^/drive/refresh$", sleep=10, wait_for=1)
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
|
||||
# Verify both hosts were checked
|
||||
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
assert session._records[1]['url'] == server_url.with_path("/drive/refresh")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_anything_else_through(time: Time, session: TracingSession, config: Config, server: SimulationServer, drive_requests: DriveRequests, interceptor: RequestInterceptor, server_url: URL):
|
||||
session.record = True
|
||||
config.override(Setting.TOKEN_SERVER_HOSTS, str(server_url) + "," + str(server_url))
|
||||
interceptor.setError("^/drive/refresh$", status=500, fail_for=1)
|
||||
await drive_requests.exchanger.refresh(drive_requests.creds)
|
||||
|
||||
# Verify both hosts were checked
|
||||
assert session._records[0]['url'] == server_url.with_path("/drive/refresh")
assert session._records[1]['url'] == server_url.with_path("/drive/refresh")
|
||||
60
hassio-google-drive-backup/tests/test_file.py
Normal file
@@ -0,0 +1,60 @@
|
||||
|
||||
|
||||
from backup.file import File
|
||||
from os.path import exists, join
|
||||
from os import remove
|
||||
import pytest
|
||||
import json
|
||||
|
||||
TEST_DATA = "when you press my special key I play a little melody"
|
||||
|
||||
|
||||
def readfile(path):
|
||||
with open(path) as f:
|
||||
return f.read()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_basic(tmpdir: str) -> None:
|
||||
path = join(tmpdir, "test.json")
|
||||
backup_path = join(tmpdir, "test.json.backup")
|
||||
|
||||
assert not File.exists(path)
|
||||
File.write(path, TEST_DATA)
|
||||
assert File.exists(path)
|
||||
assert readfile(path) == TEST_DATA
|
||||
assert readfile(backup_path) == TEST_DATA
|
||||
assert File.read(path) == TEST_DATA
|
||||
|
||||
File.delete(path)
|
||||
assert not exists(path)
|
||||
assert not exists(backup_path)
|
||||
assert not File.exists(path)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_file_deleted(tmpdir: str) -> None:
|
||||
path = join(tmpdir, "test.json")
|
||||
File.write(path, TEST_DATA)
|
||||
remove(path)
|
||||
assert File.read(path) == TEST_DATA
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_backup_deleted(tmpdir: str) -> None:
|
||||
path = join(tmpdir, "test.json")
|
||||
backup_path = join(tmpdir, "test.json.backup")
|
||||
File.write(path, TEST_DATA)
|
||||
remove(backup_path)
|
||||
assert File.read(path) == TEST_DATA
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_decode_error(tmpdir: str) -> None:
|
||||
path = join(tmpdir, "test.json")
|
||||
File.write(path, TEST_DATA)
|
||||
with open(path, "w"):
|
||||
# empties the file contents
|
||||
pass
|
||||
with open(path) as f:
|
||||
assert len(f.read()) == 0
|
||||
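# Reading the truncated file should recover the data from the .backup copy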
assert File.read(path) == TEST_DATA
|
||||
1168
hassio-google-drive-backup/tests/test_hasource.py
Normal file
File diff suppressed because it is too large
416
hassio-google-drive-backup/tests/test_haupdater.py
Normal file
@@ -0,0 +1,416 @@
|
||||
from datetime import timedelta
|
||||
from backup.model.backups import Backup
|
||||
import pytest
|
||||
|
||||
from backup.util import GlobalInfo
|
||||
from backup.ha import HaUpdater
|
||||
from backup.ha.haupdater import REASSURING_MESSAGE
|
||||
from .faketime import FakeTime
|
||||
from .helpers import HelperTestSource
|
||||
from dev.simulationserver import SimulationServer
|
||||
from backup.logger import getLast
|
||||
from backup.util import Estimator
|
||||
from dev.simulated_supervisor import SimulatedSupervisor, URL_MATCH_CORE_API
|
||||
from dev.request_interceptor import RequestInterceptor
|
||||
from backup.model import Coordinator
|
||||
from backup.config import Config, Setting
|
||||
|
||||
STALE_ATTRIBUTES = {
|
||||
"friendly_name": "Backups Stale",
|
||||
"device_class": "problem"
|
||||
}
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def source():
|
||||
return HelperTestSource("Source")
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def dest():
|
||||
return HelperTestSource("Dest")
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_init(updater: HaUpdater, global_info, supervisor: SimulatedSupervisor, server, time: FakeTime):
|
||||
await updater.update()
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "waiting"
|
||||
verifyEntity(supervisor, "binary_sensor.backups_stale",
|
||||
"off", STALE_ATTRIBUTES)
|
||||
verifyEntity(supervisor, "sensor.backup_state", "waiting", {
|
||||
'friendly_name': 'Backup State',
|
||||
'last_backup': 'Never',
|
||||
'next_backup': time.now().isoformat(),
|
||||
'last_uploaded': 'Never',
|
||||
'backups': [],
|
||||
'backups_in_google_drive': 0,
|
||||
'free_space_in_google_drive': "",
|
||||
'backups_in_home_assistant': 0,
|
||||
'size_in_google_drive': "0.0 B",
|
||||
'size_in_home_assistant': '0.0 B'
|
||||
})
|
||||
assert supervisor.getNotification() is None
|
||||
|
||||
global_info.success()
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "backed_up"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_init_failure(updater: HaUpdater, global_info: GlobalInfo, time: FakeTime, server, supervisor: SimulatedSupervisor):
|
||||
await updater.update()
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "waiting"
|
||||
|
||||
global_info.failed(Exception())
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "backed_up"
|
||||
assert supervisor.getNotification() is None
|
||||
|
||||
time.advanceDay()
|
||||
assert updater._stale()
|
||||
assert updater._state() == "error"
|
||||
await updater.update()
|
||||
assert supervisor.getNotification() == {
|
||||
'message': 'The add-on is having trouble making backups and needs attention. Please visit the add-on status page for details.',
|
||||
'title': 'Home Assistant Google Drive Backup is Having Trouble',
|
||||
'notification_id': 'backup_broken'
|
||||
}
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_failure_backoff_502(updater: HaUpdater, server, time: FakeTime, interceptor: RequestInterceptor):
|
||||
interceptor.setError(URL_MATCH_CORE_API, 502)
|
||||
for x in range(9):
|
||||
await updater.update()
|
||||
assert time.sleeps == [60, 120, 240, 300, 300, 300, 300, 300, 300]
|
||||
|
||||
interceptor.clear()
|
||||
await updater.update()
|
||||
assert time.sleeps == [60, 120, 240, 300, 300, 300, 300, 300, 300]
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_failure_backoff_510(updater: HaUpdater, server, time: FakeTime, interceptor: RequestInterceptor):
|
||||
interceptor.setError(URL_MATCH_CORE_API, 510)
|
||||
for x in range(9):
|
||||
await updater.update()
|
||||
assert time.sleeps == [60, 120, 240, 300, 300, 300, 300, 300, 300]
|
||||
|
||||
interceptor.clear()
|
||||
await updater.update()
|
||||
assert time.sleeps == [60, 120, 240, 300, 300, 300, 300, 300, 300]
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_failure_backoff_other(updater: HaUpdater, server, time: FakeTime, interceptor: RequestInterceptor):
|
||||
interceptor.setError(URL_MATCH_CORE_API, 400)
|
||||
for x in range(9):
|
||||
await updater.update()
|
||||
assert time.sleeps == [60, 120, 240, 300, 300, 300, 300, 300, 300]
|
||||
interceptor.clear()
|
||||
await updater.update()
|
||||
assert time.sleeps == [60, 120, 240, 300, 300, 300, 300, 300, 300]
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_update_backups(updater: HaUpdater, server, time: FakeTime, supervisor: SimulatedSupervisor):
|
||||
await updater.update()
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "waiting"
|
||||
verifyEntity(supervisor, "binary_sensor.backups_stale",
|
||||
"off", STALE_ATTRIBUTES)
|
||||
verifyEntity(supervisor, "sensor.backup_state", "waiting", {
|
||||
'friendly_name': 'Backup State',
|
||||
'last_backup': 'Never',
|
||||
'next_backup': time.now().isoformat(),
|
||||
'last_uploaded': 'Never',
|
||||
'backups': [],
|
||||
'backups_in_google_drive': 0,
|
||||
'backups_in_home_assistant': 0,
|
||||
'size_in_home_assistant': "0.0 B",
|
||||
'size_in_google_drive': "0.0 B",
|
||||
'free_space_in_google_drive': ''
|
||||
})
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_update_backups_no_next_backup(updater: HaUpdater, server, time: FakeTime, supervisor: SimulatedSupervisor, config: Config):
|
||||
config.override(Setting.DAYS_BETWEEN_BACKUPS, 0)
|
||||
await updater.update()
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "waiting"
|
||||
verifyEntity(supervisor, "binary_sensor.backups_stale",
|
||||
"off", STALE_ATTRIBUTES)
|
||||
verifyEntity(supervisor, "sensor.backup_state", "waiting", {
|
||||
'friendly_name': 'Backup State',
|
||||
'last_backup': 'Never',
|
||||
'next_backup': None,
|
||||
'last_uploaded': 'Never',
|
||||
'backups': [],
|
||||
'backups_in_google_drive': 0,
|
||||
'backups_in_home_assistant': 0,
|
||||
'size_in_home_assistant': "0.0 B",
|
||||
'size_in_google_drive': "0.0 B",
|
||||
'free_space_in_google_drive': ''
|
||||
})
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_update_backups_sync(updater: HaUpdater, server, time: FakeTime, backup: Backup, supervisor: SimulatedSupervisor, config: Config):
|
||||
await updater.update()
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "backed_up"
|
||||
verifyEntity(supervisor, "binary_sensor.backups_stale",
|
||||
"off", STALE_ATTRIBUTES)
|
||||
date = '1985-12-06T05:00:00+00:00'
|
||||
verifyEntity(supervisor, "sensor.backup_state", "backed_up", {
|
||||
'friendly_name': 'Backup State',
|
||||
'last_backup': date,
|
||||
'last_uploaded': date,
|
||||
'next_backup': (backup.date() + timedelta(days=config.get(Setting.DAYS_BETWEEN_BACKUPS))).isoformat(),
|
||||
'backups': [{
|
||||
'date': date,
|
||||
'name': backup.name(),
|
||||
'size': backup.sizeString(),
|
||||
'state': backup.status(),
|
||||
'slug': backup.slug()
|
||||
}
|
||||
],
|
||||
'backups_in_google_drive': 1,
|
||||
'backups_in_home_assistant': 1,
|
||||
'size_in_home_assistant': Estimator.asSizeString(backup.size()),
|
||||
'size_in_google_drive': Estimator.asSizeString(backup.size()),
|
||||
'free_space_in_google_drive': '5.0 GB'
|
||||
})
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_notification_link(updater: HaUpdater, server, time: FakeTime, global_info, supervisor: SimulatedSupervisor):
|
||||
await updater.update()
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "waiting"
|
||||
verifyEntity(supervisor, "binary_sensor.backups_stale",
|
||||
"off", STALE_ATTRIBUTES)
|
||||
verifyEntity(supervisor, "sensor.backup_state", "waiting", {
|
||||
'friendly_name': 'Backup State',
|
||||
'last_backup': 'Never',
|
||||
'next_backup': time.now().isoformat(),
|
||||
'last_uploaded': 'Never',
|
||||
'backups': [],
|
||||
'backups_in_google_drive': 0,
|
||||
'backups_in_home_assistant': 0,
|
||||
'size_in_home_assistant': "0.0 B",
|
||||
'size_in_google_drive': "0.0 B",
|
||||
'free_space_in_google_drive': ''
|
||||
})
|
||||
assert supervisor.getNotification() is None
|
||||
|
||||
global_info.failed(Exception())
|
||||
global_info.url = "http://localhost/test"
|
||||
time.advanceDay()
|
||||
await updater.update()
|
||||
assert supervisor.getNotification() == {
|
||||
'message': 'The add-on is having trouble making backups and needs attention. Please visit the add-on [status page](http://localhost/test) for details.',
|
||||
'title': 'Home Assistant Google Drive Backup is Having Trouble',
|
||||
'notification_id': 'backup_broken'
|
||||
}
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_notification_clears(updater: HaUpdater, server, time: FakeTime, global_info, supervisor: SimulatedSupervisor):
|
||||
await updater.update()
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "waiting"
|
||||
assert supervisor.getNotification() is None
|
||||
|
||||
global_info.failed(Exception())
|
||||
time.advance(hours=8)
|
||||
await updater.update()
|
||||
assert supervisor.getNotification() is not None
|
||||
|
||||
global_info.success()
|
||||
await updater.update()
|
||||
assert supervisor.getNotification() is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_publish_for_failure(updater: HaUpdater, server, time: FakeTime, global_info: GlobalInfo, supervisor: SimulatedSupervisor):
|
||||
global_info.success()
|
||||
await updater.update()
|
||||
assert supervisor.getNotification() is None
|
||||
|
||||
time.advance(hours=8)
|
||||
global_info.failed(Exception())
|
||||
await updater.update()
|
||||
assert supervisor.getNotification() is not None
|
||||
|
||||
time.advance(hours=8)
|
||||
global_info.failed(Exception())
|
||||
await updater.update()
|
||||
assert supervisor.getNotification() is not None
|
||||
|
||||
global_info.success()
|
||||
await updater.update()
|
||||
assert supervisor.getNotification() is None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_failure_logging(updater: HaUpdater, server, time: FakeTime, interceptor: RequestInterceptor):
|
||||
interceptor.setError(URL_MATCH_CORE_API, 501)
|
||||
assert getLast() is None
|
||||
await updater.update()
|
||||
assert getLast() is None
|
||||
|
||||
time.advance(minutes=1)
|
||||
await updater.update()
|
||||
assert getLast() is None
|
||||
|
||||
time.advance(minutes=5)
|
||||
await updater.update()
|
||||
assert getLast().msg == REASSURING_MESSAGE.format(501)
|
||||
|
||||
last_log = getLast()
|
||||
time.advance(minutes=5)
|
||||
await updater.update()
|
||||
assert getLast() is not last_log
|
||||
assert getLast().msg == REASSURING_MESSAGE.format(501)
|
||||
|
||||
last_log = getLast()
|
||||
interceptor.clear()
|
||||
await updater.update()
|
||||
assert getLast() is last_log
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_publish_retries(updater: HaUpdater, server: SimulationServer, time: FakeTime, backup, drive, supervisor: SimulatedSupervisor):
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") is not None
|
||||
|
||||
# Shouldn't update again after only 59 minutes
|
||||
supervisor.clearEntities()
|
||||
time.advance(minutes=59)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") is None
|
||||
|
||||
# after that it should
|
||||
supervisor.clearEntities()
|
||||
time.advance(minutes=2)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") is not None
|
||||
|
||||
supervisor.clearEntities()
|
||||
await drive.delete(backup)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") is not None
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_ignored_backups(updater: HaUpdater, time: FakeTime, server: SimulationServer, backup: Backup, supervisor: SimulatedSupervisor, coord: Coordinator, config: Config):
|
||||
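# Backups created outside the addon shouldn't show up in the sensor counts when ignored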
config.override(Setting.IGNORE_OTHER_BACKUPS, True)
|
||||
time.advance(hours=1)
|
||||
await supervisor.createBackup({'name': "test_backup"}, date=time.now())
|
||||
await coord.sync()
|
||||
await updater.update()
|
||||
state = supervisor.getAttributes("sensor.backup_state")
|
||||
assert state["backups_in_google_drive"] == 1
|
||||
assert state["backups_in_home_assistant"] == 1
|
||||
assert len(state["backups"]) == 1
|
||||
assert state['last_backup'] == backup.date().isoformat()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_update_backups_old_names(updater: HaUpdater, server, backup: Backup, time: FakeTime, supervisor: SimulatedSupervisor, config: Config):
|
||||
config.override(Setting.CALL_BACKUP_SNAPSHOT, True)
|
||||
await updater.update()
|
||||
assert not updater._stale()
|
||||
assert updater._state() == "backed_up"
|
||||
verifyEntity(supervisor, "binary_sensor.snapshots_stale",
|
||||
"off", {"friendly_name": "Snapshots Stale",
|
||||
"device_class": "problem"})
|
||||
date = '1985-12-06T05:00:00+00:00'
|
||||
verifyEntity(supervisor, "sensor.snapshot_backup", "backed_up", {
|
||||
'friendly_name': 'Snapshot State',
|
||||
'last_snapshot': date,
|
||||
'snapshots': [{
|
||||
'date': date,
|
||||
'name': backup.name(),
|
||||
'size': backup.sizeString(),
|
||||
'state': backup.status(),
|
||||
'slug': backup.slug()
|
||||
}
|
||||
],
|
||||
'snapshots_in_google_drive': 1,
|
||||
'snapshots_in_home_assistant': 1,
|
||||
'snapshots_in_hassio': 1,
|
||||
'size_in_home_assistant': Estimator.asSizeString(backup.size()),
|
||||
'size_in_google_drive': Estimator.asSizeString(backup.size())
|
||||
})
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_drive_free_space(updater: HaUpdater, time: FakeTime, server: SimulationServer, supervisor: SimulatedSupervisor, coord: Coordinator, config: Config):
|
||||
await updater.update()
|
||||
state = supervisor.getAttributes("sensor.backup_state")
|
||||
assert state["free_space_in_google_drive"] == ""
|
||||
|
||||
await coord.sync()
|
||||
await updater.update()
|
||||
state = supervisor.getAttributes("sensor.backup_state")
|
||||
assert state["free_space_in_google_drive"] == "5.0 GB"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_stale_backup_is_error(updater: HaUpdater, server, backup: Backup, time: FakeTime, supervisor: SimulatedSupervisor, config: Config):
|
||||
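# With daily backups configured, the state should eventually report "error" once backups become stale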
config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") == "backed_up"
|
||||
|
||||
time.advance(days=1)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") == "backed_up"
|
||||
|
||||
time.advance(days=1)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") == "error"
|
||||
|
||||
time.advance(days=1)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") == "error"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_stale_backup_ignores_pending(updater: HaUpdater, server, backup: Backup, time: FakeTime, supervisor: SimulatedSupervisor, config: Config, coord: Coordinator):
|
||||
config.override(Setting.DAYS_BETWEEN_BACKUPS, 1)
|
||||
|
||||
config.override(Setting.NEW_BACKUP_TIMEOUT_SECONDS, 1)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") == "backed_up"
|
||||
|
||||
time.advance(days=2)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") == "error"
|
||||
|
||||
async with supervisor._backup_inner_lock:
|
||||
await coord.sync()
|
||||
assert coord.getBackup("pending") is not None
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") == "error"
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_stale_backups_fine_for_no_creation(updater: HaUpdater, server, backup: Backup, time: FakeTime, supervisor: SimulatedSupervisor, config: Config, coord: Coordinator):
|
||||
config.override(Setting.DAYS_BETWEEN_BACKUPS, 0)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") == "backed_up"
|
||||
|
||||
# backups shouldn't become stale because the addon doesn't create them.
|
||||
time.advance(days=100)
|
||||
await updater.update()
|
||||
assert supervisor.getEntity("sensor.backup_state") == "backed_up"
|
||||
|
||||
|
||||
def verifyEntity(backend: SimulatedSupervisor, name, state, attributes):
|
||||
assert backend.getEntity(name) == state
|
||||
assert backend.getAttributes(name) == attributes
|
||||
63
hassio-google-drive-backup/tests/test_jsonfilesaver.py
Normal file
@@ -0,0 +1,63 @@
from backup.file import JsonFileSaver
from os.path import exists, join
from os import remove
import pytest
import json

TEST_DATA = {
    'info': "and the value",
    'some': 3
}


def readfile(path):
    with open(path) as f:
        return json.load(f)


@pytest.mark.asyncio
async def test_basic(tmpdir: str) -> None:
    path = join(tmpdir, "test.json")
    backup_path = join(tmpdir, "test.json.backup")

    assert not JsonFileSaver.exists(path)
    JsonFileSaver.write(path, TEST_DATA)
    assert JsonFileSaver.exists(path)
    assert readfile(path) == TEST_DATA
    assert readfile(backup_path) == TEST_DATA
    assert JsonFileSaver.read(path) == TEST_DATA

    JsonFileSaver.delete(path)
    assert not exists(path)
    assert not exists(backup_path)
    assert not JsonFileSaver.exists(path)


@pytest.mark.asyncio
async def test_file_deleted(tmpdir: str) -> None:
    path = join(tmpdir, "test.json")
    JsonFileSaver.write(path, TEST_DATA)
    remove(path)
    assert JsonFileSaver.read(path) == TEST_DATA


@pytest.mark.asyncio
async def test_backup_deleted(tmpdir: str) -> None:
    path = join(tmpdir, "test.json")
    backup_path = join(tmpdir, "test.json.backup")
    JsonFileSaver.write(path, TEST_DATA)
    remove(backup_path)
    assert JsonFileSaver.read(path) == TEST_DATA


@pytest.mark.asyncio
async def test_decode_error(tmpdir: str) -> None:
    path = join(tmpdir, "test.json")
    JsonFileSaver.write(path, TEST_DATA)
    with open(path, "w"):
        # empties the file contents
        pass
    with open(path) as f:
        assert len(f.read()) == 0
    assert JsonFileSaver.read(path) == TEST_DATA
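The tests above pin down the behavior being relied on: JsonFileSaver.write mirrors every file to a sibling ".backup" copy, and JsonFileSaver.read falls back to that copy when the primary file is missing or fails to parse. The actual implementation lives in backup.file; the following is only a minimal sketch of that contract under those assumptions, with hypothetical helper names, not the addon's code.

# Minimal sketch of the write-then-mirror / read-with-fallback contract exercised above.
# Illustrative only; the real JsonFileSaver is defined in the addon's backup.file module.
import json
import os


class SketchJsonFileSaver:
    @staticmethod
    def _backup_path(path):
        return path + ".backup"

    @staticmethod
    def write(path, data):
        # Write the primary file, then mirror it so one corrupt or missing copy is survivable.
        for target in (path, SketchJsonFileSaver._backup_path(path)):
            with open(target, "w") as f:
                json.dump(data, f)

    @staticmethod
    def read(path):
        # Prefer the primary file; fall back to the mirror on any read or parse failure.
        try:
            with open(path) as f:
                return json.load(f)
        except (OSError, json.JSONDecodeError):
            with open(SketchJsonFileSaver._backup_path(path)) as f:
                return json.load(f)

    @staticmethod
    def exists(path):
        return os.path.exists(path) or os.path.exists(SketchJsonFileSaver._backup_path(path))

    @staticmethod
    def delete(path):
        # Remove both copies so a later exists() check reports the file as gone.
        for target in (path, SketchJsonFileSaver._backup_path(path)):
            if os.path.exists(target):
                os.remove(target)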
1349
hassio-google-drive-backup/tests/test_model.py
Normal file
File diff suppressed because it is too large
26
hassio-google-drive-backup/tests/test_rangelookup.py
Normal file
@@ -0,0 +1,26 @@
from backup.util import RangeLookup


def test_lookup():
    data = [1, 3, 5]
    lookup = RangeLookup(data, lambda x: x)
    assert list(lookup.matches(-1, 0)) == []
    assert list(lookup.matches(6, 7)) == []
    assert list(lookup.matches(2, 2)) == []
    assert list(lookup.matches(4, 4)) == []
    assert list(lookup.matches(6, 6)) == []

    assert list(lookup.matches(0, 6)) == [1, 3, 5]
    assert list(lookup.matches(1, 5)) == [1, 3, 5]

    assert list(lookup.matches(1, 3)) == [1, 3]
    assert list(lookup.matches(0, 4)) == [1, 3]
    assert list(lookup.matches(3, 5)) == [3, 5]
    assert list(lookup.matches(2, 6)) == [3, 5]

    assert list(lookup.matches(0, 2)) == [1]
    assert list(lookup.matches(1, 1)) == [1]
    assert list(lookup.matches(3, 3)) == [3]
    assert list(lookup.matches(2, 4)) == [3]
    assert list(lookup.matches(5, 5)) == [5]
    assert list(lookup.matches(4, 5)) == [5]
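These expectations describe RangeLookup as an inclusive range query over a list already sorted by the key function: matches(low, high) yields every item whose key falls inside [low, high]. The sketch below is an assumed equivalent built on Python's bisect module, written only to make those semantics concrete; it is not the repository's backup.util.RangeLookup implementation.

# Assumed behavior only: an inclusive [low, high] query over a pre-sorted list,
# matching what test_lookup asserts. Not the actual backup.util.RangeLookup source.
from bisect import bisect_left, bisect_right


class SketchRangeLookup:
    def __init__(self, items, key):
        self._items = list(items)                 # expected to already be sorted by key
        self._keys = [key(item) for item in self._items]

    def matches(self, low, high):
        # bisect_left/bisect_right bound the half-open slice of keys within [low, high]
        start = bisect_left(self._keys, low)
        end = bisect_right(self._keys, high)
        return iter(self._items[start:end])


# Usage consistent with the assertions above:
lookup = SketchRangeLookup([1, 3, 5], lambda x: x)
assert list(lookup.matches(0, 4)) == [1, 3]
assert list(lookup.matches(2, 2)) == []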
46
hassio-google-drive-backup/tests/test_resolver.py
Normal file
@@ -0,0 +1,46 @@
import pytest
import socket

from backup.config import Config, Setting
from backup.util import Resolver


@pytest.mark.asyncio
async def test_empty_name_server(resolver: Resolver, config: Config):
    assert resolver._alt_dns.nameservers == ["8.8.8.8", "8.8.4.4"]
    assert resolver._resolver is resolver._original_dns
    config.override(Setting.ALTERNATE_DNS_SERVERS, "")
    resolver.updateConfig()
    assert resolver._resolver is resolver._alt_dns

    # make sure the value is cached
    prev = resolver._alt_dns
    resolver.updateConfig()
    assert resolver._alt_dns is prev


@pytest.mark.asyncio
async def test_toggle(resolver: Resolver):
    assert resolver._resolver is resolver._original_dns
    resolver.toggle()
    assert resolver._resolver is resolver._alt_dns
    resolver.toggle()
    assert resolver._resolver is resolver._original_dns


@pytest.mark.asyncio
async def test_hard_resolve(resolver: Resolver, config: Config):
    expected = [{
        'family': 0,
        'flags': socket.AddressInfo.AI_NUMERICHOST,
        'port': 1234,
        'proto': 0,
        'host': "1.2.3.4",
        'hostname': "www.googleapis.com"
    }]
    config.override(Setting.DRIVE_IPV4, "1.2.3.4")
    assert await resolver.resolve("www.googleapis.com", 1234, 0) == expected
    resolver.toggle()
    assert await resolver.resolve("www.googleapis.com", 1234, 0) == expected
    resolver.toggle()
    assert await resolver.resolve("www.googleapis.com", 1234, 0) == expected
442
hassio-google-drive-backup/tests/test_scheme.py
Normal file
@@ -0,0 +1,442 @@
from datetime import datetime, timedelta

import pytest
from dateutil.tz import tzutc
from pytest import fail

from backup.model import GenConfig, GenerationalScheme, DummyBackup, Backup
from backup.time import Time


def test_timezone(time) -> None:
    assert time.local_tz is not None


def test_trivial(time) -> None:
    config = GenConfig(days=1)

    scheme = GenerationalScheme(time, config, count=0)

    backups = [
        makeBackup("single", time.local(1928, 12, 6))
    ]

    assert scheme.getOldest(backups)[1].date() == time.local(1928, 12, 6)


def test_trivial_empty(time):
    config = GenConfig(days=1)
    scheme = GenerationalScheme(time, config, count=0)
    assert scheme.getOldest([])[1] is None


def test_trivial_oldest(time: Time) -> None:
    config = GenConfig(days=1)
    scheme = GenerationalScheme(time, config, count=0)

    backups = [
        makeBackup("test", time.local(1985, 12, 6, 10)),
        makeBackup("test", time.local(1985, 12, 6, 12)),
        makeBackup("test", time.local(1985, 12, 6, 13))
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(1985, 12, 6, 10),
        time.local(1985, 12, 6, 12),
        time.local(1985, 12, 6, 13)
    ])


def test_duplicate_weeks(time):
    config = GenConfig(weeks=1, day_of_week='wed')

    scheme = GenerationalScheme(time, config, count=0)

    backups = [
        makeBackup("test", time.local(1985, 12, 5)),
        makeBackup("test", time.local(1985, 12, 4)),
        makeBackup("test", time.local(1985, 12, 1)),
        makeBackup("test", time.local(1985, 12, 2))
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(1985, 12, 1),
        time.local(1985, 12, 2),
        time.local(1985, 12, 5),
        time.local(1985, 12, 4)
    ])


def test_duplicate_months(time) -> None:
    config = GenConfig(months=2, day_of_month=15)

    scheme = GenerationalScheme(time, config, count=0)

    backups = [
        makeBackup("test", time.local(1985, 12, 6)),
        makeBackup("test", time.local(1985, 12, 15)),
        makeBackup("test", time.local(1985, 11, 20)),
        makeBackup("test", time.local(1985, 11, 15))
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(1985, 11, 20),
        time.local(1985, 12, 6),
        time.local(1985, 11, 15),
        time.local(1985, 12, 15)
    ])


def test_duplicate_years(time):
    config = GenConfig(years=2, day_of_year=1)

    scheme = GenerationalScheme(time, config, count=0)

    backups = [
        makeBackup("test", time.local(1985, 12, 31)),
        makeBackup("test", time.local(1985, 1, 1)),
        makeBackup("test", time.local(1984, 12, 31)),
        makeBackup("test", time.local(1984, 1, 1))
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(1984, 12, 31),
        time.local(1985, 12, 31),
        time.local(1984, 1, 1),
        time.local(1985, 1, 1)
    ])


def test_removal_order(time) -> None:
    config = GenConfig(days=5, weeks=2, months=2, years=2,
                       day_of_week='mon', day_of_month=15, day_of_year=1)

    scheme = GenerationalScheme(time, config, count=0)

    backups = [
        # 5 days, week 1
        makeBackup("test", time.local(1985, 12, 7)),  # day 1
        makeBackup("test", time.local(1985, 12, 6)),  # day 2
        makeBackup("test", time.local(1985, 12, 5)),  # day 3
        makeBackup("test", time.local(1985, 12, 4)),  # day 4
        makeBackup("test", time.local(1985, 12, 3)),  # day 5

        makeBackup("test", time.local(1985, 12, 1)),  # 1st week pref

        # week 2
        makeBackup("test", time.local(1985, 11, 25)),  # 1st month pref

        # month 2
        makeBackup("test", time.local(1985, 11, 15)),  # 2nd month pref

        # year 1
        makeBackup("test", time.local(1985, 1, 1)),  # 1st year preference
        makeBackup("test", time.local(1985, 1, 2)),

        # year 2
        makeBackup("test", time.local(1984, 6, 1)),  # 2nd year pref
        makeBackup("test", time.local(1984, 7, 1)),

        # year 3
        makeBackup("test", time.local(1983, 1, 1)),
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(1983, 1, 1),
        time.local(1984, 7, 1),
        time.local(1985, 1, 2),

        time.local(1984, 6, 1),
        time.local(1985, 1, 1),
        time.local(1985, 11, 15),
        time.local(1985, 11, 25),
        time.local(1985, 12, 1),
        time.local(1985, 12, 3),
        time.local(1985, 12, 4),
        time.local(1985, 12, 5),
        time.local(1985, 12, 6),
        time.local(1985, 12, 7)
    ])


@pytest.mark.timeout(60)
def test_simulate_daily_backup_for_4_years(time):
    config = GenConfig(days=4, weeks=4, months=4, years=4,
                       day_of_week='mon', day_of_month=1, day_of_year=1)
    scheme = GenerationalScheme(time, config, count=16)
    backups = simulate(time.local(2019, 1, 1),
                       time.local(2022, 12, 31),
                       scheme)
    assertRemovalOrder(GenerationalScheme(time, config, count=0), backups, [
        # 4 years
        time.local(2019, 1, 1),
        time.local(2020, 1, 1),
        time.local(2021, 1, 1),
        time.local(2022, 1, 1),

        # 4 months
        time.local(2022, 9, 1),
        time.local(2022, 10, 1),
        time.local(2022, 11, 1),
        time.local(2022, 12, 1),

        # 4 weeks
        time.local(2022, 12, 5),
        time.local(2022, 12, 12),
        time.local(2022, 12, 19),
        time.local(2022, 12, 26),

        # 4 days
        time.local(2022, 12, 28),
        time.local(2022, 12, 29),
        time.local(2022, 12, 30),
        time.local(2022, 12, 31)
    ])


@pytest.mark.timeout(60)
def test_simulate_agressive_daily_backup_for_4_years(time):
    config = GenConfig(days=4, weeks=4, months=4, years=4,
                       day_of_week='mon', day_of_month=1, day_of_year=1, aggressive=True)
    scheme = GenerationalScheme(time, config, count=16)
    backups = simulate(time.local(2019, 1, 1),
                       time.local(2022, 12, 31),
                       scheme)

    assertRemovalOrder(GenerationalScheme(time, config, count=0), backups, [
        # 4 years
        time.local(2019, 1, 1),
        time.local(2020, 1, 1),
        time.local(2021, 1, 1),
        time.local(2022, 1, 1),

        # 4 months
        time.local(2022, 9, 1),
        time.local(2022, 10, 1),
        time.local(2022, 11, 1),
        time.local(2022, 12, 1),

        # 4 weeks
        time.local(2022, 12, 5),
        time.local(2022, 12, 12),
        time.local(2022, 12, 19),
        time.local(2022, 12, 26),

        # 4 days
        time.local(2022, 12, 28),
        time.local(2022, 12, 29),
        time.local(2022, 12, 30),
        time.local(2022, 12, 31),
    ])


def test_count_limit(time):
    config = GenConfig(years=2, day_of_year=1)
    scheme = GenerationalScheme(time, config, count=1)
    backups = [
        makeBackup("test", time.local(1985, 1, 1)),
        makeBackup("test", time.local(1984, 1, 1))
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(1984, 1, 1)
    ])


def test_aggressive_removal_below_limit(time):
    config = GenConfig(years=2, day_of_year=1, aggressive=True)
    scheme = GenerationalScheme(time, config, count=5)
    backups = [
        makeBackup("test", time.local(1985, 1, 1)),
        makeBackup("test", time.local(1985, 1, 2))
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(1985, 1, 2)
    ])


def test_aggressive_removal_at_limit_ok(time):
    config = GenConfig(years=2, day_of_year=1, aggressive=True)
    scheme = GenerationalScheme(time, config, count=2)
    backups = [
        makeBackup("test", time.local(1985, 1, 1)),
        makeBackup("test", time.local(1984, 1, 1))
    ]
    assertRemovalOrder(scheme, backups, [])


def test_aggressive_removal_over_limit(time):
    config = GenConfig(years=2, day_of_year=1, aggressive=True)
    scheme = GenerationalScheme(time, config, count=2)
    backups = [
        makeBackup("test", time.local(1985, 1, 1)),
        makeBackup("test", time.local(1984, 1, 1)),
        makeBackup("test", time.local(1983, 1, 1)),
        makeBackup("test", time.local(1983, 1, 2))
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(1983, 1, 1),
        time.local(1983, 1, 2)
    ])


def test_removal_order_week(time: Time):
    config = GenConfig(weeks=1, day_of_week='wed', aggressive=True)
    scheme = GenerationalScheme(time, config, count=1)
    backups = [
        makeBackup("test", time.local(2019, 10, 28)),
        makeBackup("test", time.local(2019, 10, 29)),
        makeBackup("test", time.local(2019, 10, 30, 1)),
        makeBackup("test", time.local(2019, 10, 30, 2)),
        makeBackup("test", time.local(2019, 10, 31)),
        makeBackup("test", time.local(2019, 11, 1)),
        makeBackup("test", time.local(2019, 11, 2)),
        makeBackup("test", time.local(2019, 11, 3)),
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(2019, 10, 28),
        time.local(2019, 10, 29),
        time.local(2019, 10, 30, 1),
        time.local(2019, 10, 31),
        time.local(2019, 11, 1),
        time.local(2019, 11, 2),
        time.local(2019, 11, 3)
    ])


def test_removal_order_month(time):
    config = GenConfig(months=1, day_of_month=20, aggressive=True)

    scheme = GenerationalScheme(time, config, count=1)

    backups = [
        makeBackup("test", time.local(2019, 1, 1)),
        makeBackup("test", time.local(2019, 1, 2)),
        makeBackup("test", time.local(2019, 1, 20, 1)),
        makeBackup("test", time.local(2019, 1, 20, 2)),
        makeBackup("test", time.local(2019, 1, 21)),
        makeBackup("test", time.local(2019, 1, 25)),
        makeBackup("test", time.local(2019, 1, 26)),
        makeBackup("test", time.local(2019, 1, 27)),
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(2019, 1, 1),
        time.local(2019, 1, 2),
        time.local(2019, 1, 20, 1),
        time.local(2019, 1, 21),
        time.local(2019, 1, 25),
        time.local(2019, 1, 26),
        time.local(2019, 1, 27)
    ])


def test_removal_order_many_months(time):
    config = GenConfig(months=70, day_of_month=20, aggressive=True)

    scheme = GenerationalScheme(time, config, count=10)

    backups = [
        makeBackup("test", time.local(2019, 7, 20)),  # preferred
        makeBackup("test", time.local(2018, 7, 18)),  # preferred
        makeBackup("test", time.local(2018, 7, 21)),
        makeBackup("test", time.local(2017, 1, 19)),
        makeBackup("test", time.local(2017, 1, 20)),  # preferred
        makeBackup("test", time.local(2017, 1, 31)),
        makeBackup("test", time.local(2016, 12, 1)),  # preferred
        makeBackup("test", time.local(2014, 1, 31)),
        makeBackup("test", time.local(2014, 1, 1)),  # preferred
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(2014, 1, 31),
        time.local(2017, 1, 19),
        time.local(2017, 1, 31),
        time.local(2018, 7, 21),
    ])


def test_removal_order_years(time):
    config = GenConfig(years=2, day_of_year=15, aggressive=True)

    scheme = GenerationalScheme(time, config, count=10)

    backups = [
        makeBackup("test", time.local(2019, 2, 15)),
        makeBackup("test", time.local(2019, 1, 15)),  # keep
        makeBackup("test", time.local(2018, 1, 14)),
        makeBackup("test", time.local(2018, 1, 15)),  # keep
        makeBackup("test", time.local(2018, 1, 16)),
        makeBackup("test", time.local(2017, 1, 15)),
    ]
    assertRemovalOrder(scheme, backups, [
        time.local(2017, 1, 15),
        time.local(2018, 1, 14),
        time.local(2018, 1, 16),
        time.local(2019, 2, 15),
    ])


@pytest.mark.asyncio
async def test_ignored_generational_labels(time):
    config = GenConfig(days=2)

    scheme = GenerationalScheme(time, config, count=10)
    backup1 = makeBackup("test", time.local(2019, 2, 15))
    backup2 = makeBackup("test", time.local(2019, 2, 14))
    backup3 = makeBackup("test", time.local(2019, 2, 13), ignore=True)
    backups = [backup1, backup2, backup3]
    scheme.handleNaming(backups)
    assert backup1.getStatusDetail() == ['Day 1 of 2']
    assert backup2.getStatusDetail() == ['Day 2 of 2']
    assert backup3.getStatusDetail() is None


def getRemovalOrder(scheme, toCheck):
    backups = list(toCheck)
    removed = []
    while True:
        oldest = scheme.getOldest(backups)
        if not oldest:
            break
        removed.append(oldest.date())
        backups.remove(oldest)
    return removed


def assertRemovalOrder(scheme, toCheck, expected):
    backups = list(toCheck)
    removed = []
    index = 0
    time = scheme.time
    while True:
        reason, oldest = scheme.getOldest(backups)
        if index >= len(expected):
            if oldest is not None:
                fail("at index {0}, expected 'None' but got {1}".format(
                    index, time.toLocal(oldest.date())))
            break
        if oldest.date() != expected[index]:
            fail("at index {0}, expected {1} but got {2}".format(
                index, time.toLocal(expected[index]), time.toLocal(oldest.date())))
        removed.append(oldest.date())
        backups.remove(oldest)
        index += 1
    return removed


def makeBackup(slug, date, name=None, ignore=False) -> Backup:
    if not name:
        name = slug
    return DummyBackup(name, date.astimezone(tzutc()), "src", slug, ignore=ignore)


def simulate(start: datetime, end: datetime, scheme: GenerationalScheme, backups=[]):
    today = start
    while today <= end:
        backups.append(makeBackup("test", today))
        test = scheme.getOldest(backups)
        if test is None:
            pass
        reason, oldest = test
        while oldest is not None:
            backups.remove(oldest)
            test = scheme.getOldest(backups)
            if test is None:
                pass
            reason, oldest = test
        today = today + timedelta(hours=27)
        today = scheme.time.local(today.year, today.month, today.day)
    return backups
59
hassio-google-drive-backup/tests/test_server.py
Normal file
@@ -0,0 +1,59 @@
import pytest
from yarl import URL
from dev.simulationserver import SimulationServer
from aiohttp import ClientSession, hdrs
from backup.config import Config
from .faketime import FakeTime
import json


@pytest.mark.asyncio
async def test_refresh_known_error(server: SimulationServer, session: ClientSession, config: Config, server_url: URL):
    async with session.post(server_url.with_path("drive/refresh"), json={"blah": "blah"}) as r:
        assert r.status == 503
        assert await r.json() == {
            'error': "Required key 'refresh_token' was missing from the request payload"
        }


@pytest.mark.asyncio
async def test_refresh_unknown_error(server: SimulationServer, session: ClientSession, config: Config, server_url: URL):
    async with session.post(server_url.with_path("drive/refresh"), data={}) as r:
        assert r.status == 500
        assert len((await r.json())["error"]) > 0


@pytest.mark.asyncio
async def test_old_auth_method(server: SimulationServer, session: ClientSession, server_url: URL):
    start_auth = server_url.with_path("drive/authorize").with_query({
        "redirectbacktoken": "http://example.com"
    })

    # Verify the redirect to Drive's oauthv2 endpoint
    async with session.get(start_auth, data={}, allow_redirects=False) as r:
        assert r.status == 303
        redirect = URL(r.headers[hdrs.LOCATION])
        assert redirect.path == "/o/oauth2/v2/auth"
        assert redirect.host == "localhost"

    # Verify the redirect back to the server's oauth page
    async with session.get(redirect, data={}, allow_redirects=False) as r:
        assert r.status == 303
        redirect = URL(r.headers[hdrs.LOCATION])
        assert redirect.path == "/drive/authorize"
        assert redirect.host == "localhost"

    # Verify we get redirected back to the addon (example.com) with creds
    async with session.get(redirect, data={}, allow_redirects=False) as r:
        assert r.status == 303
        redirect = URL(r.headers[hdrs.LOCATION])
        assert redirect.query.get("creds") is not None
        assert redirect.host == "example.com"


@pytest.mark.asyncio
async def test_log_to_firestore(time: FakeTime, server: SimulationServer, session: ClientSession, server_url: URL):
    data = {"info": "testing"}
    async with session.post(server_url.with_path("logerror"), data=json.dumps(data)) as r:
        assert r.status == 200
    assert server._authserver.error_store.last_error is not None
    assert server._authserver.error_store.last_error['report'] == data
37
hassio-google-drive-backup/tests/test_settings.py
Normal file
@@ -0,0 +1,37 @@
from backup.config import Setting, addon_config, _CONFIG


def test_defaults():
    # all settings should have a default
    for setting in Setting:
        if setting is not Setting.DEBUGGER_PORT:
            assert setting.default() is not None, setting.value + " has no default"


def test_validators():
    # all settings should have a validator
    for setting in Setting:
        assert setting.validator() is not None, setting.value + " has no validator"


def test_defaults_are_valid():
    # all default values should be valid and validate to their own value
    for setting in Setting:
        assert setting.validator().validate(setting.default()) == setting.default()


def test_setting_configuration():
    # All settings in the default config should have the exact same parse expression
    for setting in Setting:
        if setting.value in addon_config["schema"]:
            if setting != Setting.GENERATIONAL_DAY_OF_WEEK:
                assert _CONFIG[setting] == addon_config["schema"][setting.value], setting.value


def test_settings_present():
    all = set()
    for setting in Setting:
        all.add(setting.value)

    for setting in addon_config["schema"]:
        assert setting in all, setting + " not present in config.json"
22
hassio-google-drive-backup/tests/test_starter.py
Normal file
@@ -0,0 +1,22 @@
import pytest
import os
from backup.module import MainModule, BaseModule
from backup.starter import Starter
from backup.config import Config, Setting
from injector import Injector


@pytest.mark.asyncio
async def test_bootstrap_requirements(cleandir):
    # This just verifies we're able to satisfy starter's injector requirements.
    injector = Injector([BaseModule(), MainModule()])
    config = injector.get(Config)
    config.override(Setting.DATA_CACHE_FILE_PATH, os.path.join(cleandir, "data_cache.json"))
    injector.get(Starter)


@pytest.mark.asyncio
async def test_start_and_stop(injector):
    starter = injector.get(Starter)
    await starter.start()
    await starter.stop()
64
hassio-google-drive-backup/tests/test_timezone.py
Normal file
@@ -0,0 +1,64 @@
import datetime
import os
from backup.time import Time, _infer_timezone_from_env, _infer_timezone_from_name, _infer_timezone_from_offset, _infer_timezone_from_system
from .faketime import FakeTime


def test_parse() -> None:
    time = Time.parse("1985-12-06 01:01:01.0001")
    assert str(time) == "1985-12-06 01:01:01.000100+00:00"

    time = Time.parse("1985-12-06 01:01:01.0001+01:00")
    assert str(time) == "1985-12-06 01:01:01.000100+01:00"


def test_parse_timezone(time) -> None:
    assertUtc(Time.parse("1985-12-06"))
    assertUtc(Time.parse("1985-12-06 21:21"))
    assertUtc(Time.parse("1985-12-06 21:21+00:00"))
    assertUtc(Time.parse("1985-12-06 21:21 UTC"))
    assertUtc(Time.parse("1985-12-06 21:21 GGGR"))

    assertOffset(Time.parse("1985-12-06 21:21+10"), 10)
    assertOffset(Time.parse("1985-12-06 21:21-10"), -10)


def assertOffset(time, hours):
    assert time.tzinfo.utcoffset(time) == datetime.timedelta(hours=hours)


def assertUtc(time):
    assertOffset(time, 0)


def test_common_timezones(time: FakeTime):
    assert _infer_timezone_from_system() is not None
    assert _infer_timezone_from_name() is not None
    assert _infer_timezone_from_offset() is not None
    assert _infer_timezone_from_env() is None

    os.environ["TZ"] = "America/Denver"
    assert _infer_timezone_from_env().tzname(None) == "America/Denver"

    os.environ["TZ"] = "Australia/Brisbane"
    assert _infer_timezone_from_env().tzname(None) == "Australia/Brisbane"

    tzs = {"SYSTEM": _infer_timezone_from_system(),
           "ENV": _infer_timezone_from_env(),
           "OFFSET": _infer_timezone_from_offset(),
           "NAME": _infer_timezone_from_name()}

    for name, tz in tzs.items():
        print(name)
        time.setTimeZone(tz)
        time.now()
        time.nowLocal()
        time.localize(datetime.datetime(1985, 12, 6))
        time.local(1985, 12, 6)
        time.toLocal(time.now())
        time.toUtc(time.nowLocal())


def test_system_timezone(time: FakeTime):
    tz = _infer_timezone_from_system()
    assert tz.tzname(time.now()) == "UTC"
1180
hassio-google-drive-backup/tests/test_uiserver.py
Normal file
File diff suppressed because it is too large
47
hassio-google-drive-backup/tests/test_version.py
Normal file
@@ -0,0 +1,47 @@
from backup.config import Version


def test_default():
    assert Version.default() == Version.default()
    assert not Version.default() > Version.default()
    assert not Version.default() < Version.default()
    assert not Version.default() != Version.default()
    assert Version.default() >= Version.default()
    assert Version.default() <= Version.default()


def test_version():
    assert Version(1, 2, 3) == Version(1, 2, 3)
    assert Version(1, 2, 3) >= Version(1, 2, 3)
    assert Version(1, 2, 3) <= Version(1, 2, 3)
    assert Version(1, 2, 3) > Version(1, 2)
    assert Version(1) < Version(2)
    assert Version(2) > Version(1)
    assert Version(1) != Version(2)
    assert Version(1, 2) > Version(1)
    assert Version(1) < Version(1, 2)


def test_parse():
    assert Version.parse("1.0") == Version(1, 0)
    assert Version.parse("1.2.3") == Version(1, 2, 3)


def test_parse_staging():
    assert Version.parse("1.0.staging.1") == Version(1, 0, 1)
    assert Version.parse("1.0.staging.1").staging
    assert Version.parse("1.0.staging.1") > Version(1.0)
    assert Version.parse("1.2.3") == Version(1, 2, 3)


def test_junk_strings():
    assert Version.parse("1-.2.3.1") == Version(1, 2, 3, 1)
    assert Version.parse("ignore-1.2.3.1") == Version(1, 2, 3, 1)
    assert Version.parse("1.2.ignore.this.text.3.and...andhere.too.1") == Version(1, 2, 3, 1)


def test_broken_versions():
    assert Version.parse("") == Version.default()
    assert Version.parse(".") == Version.default()
    assert Version.parse("empty") == Version.default()
    assert Version.parse("no.version.here") == Version.default()
119
hassio-google-drive-backup/tests/test_watcher.py
Normal file
@@ -0,0 +1,119 @@
from backup.watcher import Watcher
from backup.config import Config, Setting, CreateOptions
from backup.ha import HaSource
from os.path import join
from .faketime import FakeTime
from asyncio import sleep
import pytest
import os

TEST_FILE_NAME = "test.tar"


@pytest.mark.asyncio
async def test_watcher_trigger_on_backup(server, watcher: Watcher, config: Config, time: FakeTime, ha: HaSource):
    await watcher.start()
    assert not await watcher.check()
    watcher.noticed_change_signal.clear()
    await simulateBackup(config, TEST_FILE_NAME, ha, time)
    await watcher.noticed_change_signal.wait()
    time.advance(minutes=11)
    assert await watcher.check()


@pytest.mark.asyncio
async def test_disable_watching(server, watcher: Watcher, config: Config, time: FakeTime, ha: HaSource):
    config.override(Setting.WATCH_BACKUP_DIRECTORY, False)
    await watcher.start()
    assert not await watcher.check()
    await simulateBackup(config, TEST_FILE_NAME, ha, time)
    await sleep(1)
    time.advance(minutes=11)
    assert not await watcher.check()


@pytest.mark.asyncio
async def test_watcher_doesnt_trigger_on_no_backup(server, watcher: Watcher, config: Config, time: FakeTime, ha: HaSource):
    await watcher.start()
    assert not await watcher.check()
    file = join(config.get(Setting.BACKUP_DIRECTORY_PATH), TEST_FILE_NAME)
    watcher.noticed_change_signal.clear()
    with open(file, "w"):
        pass
    await watcher.noticed_change_signal.wait()
    time.advance(minutes=11)
    assert not await watcher.check()


@pytest.mark.asyncio
async def test_watcher_below_wait_threshold(server, watcher: Watcher, config: Config, time: FakeTime, ha: HaSource):
    await watcher.start()
    assert not await watcher.check()
    for x in range(10):
        watcher.noticed_change_signal.clear()
        await simulateBackup(config, f"{TEST_FILE_NAME}.{x}", ha, time)
        await watcher.noticed_change_signal.wait()
        time.advance(seconds=9)
        assert not await watcher.check()
    time.advance(minutes=11)
    assert await watcher.check()


@pytest.mark.asyncio
async def test_watcher_triggers_for_deletes(server, watcher: Watcher, config: Config, time: FakeTime, ha: HaSource):
    await simulateBackup(config, TEST_FILE_NAME, ha, time)

    await watcher.start()
    assert not await watcher.check()
    watcher.noticed_change_signal.clear()
    os.remove(join(config.get(Setting.BACKUP_DIRECTORY_PATH), TEST_FILE_NAME))
    await watcher.noticed_change_signal.wait()

    time.advance(seconds=30)
    assert await watcher.check()


@pytest.mark.asyncio
async def test_moves_out_trigger(server, watcher: Watcher, config: Config, time: FakeTime, ha: HaSource):
    await simulateBackup(config, TEST_FILE_NAME, ha, time)
    await watcher.start()
    watcher.noticed_change_signal.clear()
    os.mkdir(join(config.get(Setting.BACKUP_DIRECTORY_PATH), "subdir"))
    os.rename(join(config.get(Setting.BACKUP_DIRECTORY_PATH), TEST_FILE_NAME), join(config.get(Setting.BACKUP_DIRECTORY_PATH), "subdir", TEST_FILE_NAME))
    await watcher.noticed_change_signal.wait()
    time.advance(minutes=11)
    assert await watcher.check()


# Check if move-ins are really necessary
# @pytest.mark.asyncio
# async def test_moves_in_trigger(server, watcher: Watcher, config: Config, time: FakeTime, ha: HaSource):
#     os.mkdir(join(config.get(Setting.BACKUP_DIRECTORY_PATH), "subdir"))
#     await simulateBackup(config, "subdir/" + TEST_FILE_NAME, ha, time)
#     await watcher.start()
#     watcher.noticed_change_signal.clear()
#     os.rename(join(config.get(Setting.BACKUP_DIRECTORY_PATH), "subdir", TEST_FILE_NAME), join(config.get(Setting.BACKUP_DIRECTORY_PATH), TEST_FILE_NAME))
#     await watcher.noticed_change_signal.wait()
#     time.advance(minutes=11)
#     assert await watcher.check()


# Verify that subdirectories get ignored
@pytest.mark.asyncio
async def test_subdirs_dont_trigger(server, watcher: Watcher, config: Config, time: FakeTime, ha: HaSource):
    await simulateBackup(config, TEST_FILE_NAME, ha, time)
    await watcher.start()
    watcher.noticed_change_signal.clear()
    os.mkdir(join(config.get(Setting.BACKUP_DIRECTORY_PATH), "subdir"))
    with open(join(config.get(Setting.BACKUP_DIRECTORY_PATH), "subdir", "ignored.txt"), "w"):
        pass
    assert not await watcher.check()
    time.advance(minutes=11)
    assert not await watcher.check()


async def simulateBackup(config, file_name, ha, time):
    file = join(config.get(Setting.BACKUP_DIRECTORY_PATH), file_name)
    with open(file, "w"):
        pass
    await ha.create(CreateOptions(time.now(), file_name))
46
hassio-google-drive-backup/tests/test_worker.py
Normal file
@@ -0,0 +1,46 @@
import asyncio

import pytest

from backup.worker import StopWorkException, Worker
from .faketime import FakeTime


@pytest.mark.asyncio
async def test_worker(time: FakeTime):
    data = {'count': 0}

    async def work():
        if data['count'] >= 5:
            raise StopWorkException()
        data['count'] += 1

    worker = Worker("test", work, time, 1)
    task = await worker.start()
    await asyncio.wait([task])
    assert not worker.isRunning()
    assert data['count'] == 5
    assert time.sleeps == [1, 1, 1, 1, 1]
    # assert worker._task.name == "test"
    assert worker.getLastError() is None


@pytest.mark.asyncio
async def test_worker_error(time: FakeTime):
    data = {'count': 0}

    async def work():
        if data['count'] >= 5:
            raise StopWorkException()
        data['count'] += 1
        raise OSError()

    worker = Worker("test", work, time, 1)
    task = await worker.start()
    await asyncio.wait([task])
    assert not worker.isRunning()
    assert data['count'] == 5
    assert time.sleeps == [1, 1, 1, 1, 1]
    # assert worker.getName() == "test"
    assert worker.getLastError() is not None
    assert type(worker.getLastError()) is OSError
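These two tests capture the Worker contract: call the supplied coroutine in a loop, sleep the configured interval between calls, remember the most recent exception instead of dying on it, and stop cleanly when the coroutine raises StopWorkException. The real class lives in backup.worker; the loop below is only a hedged sketch of that behavior with simplified, hypothetical names and a stand-in stop exception.

# Hedged sketch of the loop the tests above describe; not the actual backup.worker.Worker.
import asyncio


class SketchStopWork(Exception):
    """Stand-in for backup.worker.StopWorkException."""


class SketchWorker:
    def __init__(self, name, method, sleep_seconds):
        self._name = name
        self._method = method
        self._sleep = sleep_seconds
        self._last_error = None
        self._running = False

    async def _loop(self):
        self._running = True
        try:
            while True:
                try:
                    await self._method()
                except SketchStopWork:
                    # The work method asked us to stop; exit without recording an error.
                    break
                except Exception as e:
                    # Remember the failure but keep the worker alive for the next iteration.
                    self._last_error = e
                await asyncio.sleep(self._sleep)
        finally:
            self._running = False

    async def start(self):
        # Hand back the task so callers can await its completion, as the tests do.
        return asyncio.create_task(self._loop(), name=self._name)

    def isRunning(self):
        return self._running

    def getLastError(self):
        return self._last_error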
0
hassio-google-drive-backup/tests/util/__init__.py
Normal file
51
hassio-google-drive-backup/tests/util/test_token_bucket.py
Normal file
@@ -0,0 +1,51 @@
from backup.util import TokenBucket
from ..faketime import FakeTime


async def test_consume(time: FakeTime):
    bucket = TokenBucket(time, 10, 1, 1)
    assert bucket.consume(1)
    assert not bucket.consume(1)

    time.advance(seconds=1)
    assert bucket.consume(1)
    assert not bucket.consume(1)


async def test_async_consume(time: FakeTime):
    bucket = TokenBucket(time, 10, 1, 1)
    assert await bucket.consumeWithWait(1, 2) == 1
    assert len(time.sleeps) == 0

    time.advance(seconds=2)
    assert await bucket.consumeWithWait(1, 2) == 2
    assert len(time.sleeps) == 0

    assert await bucket.consumeWithWait(1, 2) == 1
    assert len(time.sleeps) == 1
    assert time.sleeps[0] == 1


async def test_capacity(time: FakeTime):
    bucket = TokenBucket(time, 10, 1)
    assert await bucket.consumeWithWait(1, 10) == 10
    assert len(time.sleeps) == 0

    assert await bucket.consumeWithWait(5, 10) == 5
    assert len(time.sleeps) == 1
    assert time.sleeps[0] == 5

    time.clearSleeps()
    assert await bucket.consumeWithWait(20, 20) == 20
    assert len(time.sleeps) == 1
    assert time.sleeps[0] == 20

    time.clearSleeps()
    time.advance(seconds=5)
    assert await bucket.consumeWithWait(1, 10) == 5


async def test_higher_fill_rate(time: FakeTime):
    bucket = TokenBucket(time, capacity=1000, fill_rate=100)
    assert await bucket.consumeWithWait(1, 1000) == 1000
    assert len(time.sleeps) == 0
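Because these tests run against the FakeTime fixture, nothing actually sleeps; the sleeps list just records how long consumeWithWait decided to wait for tokens to refill at the configured fill rate. As a rough illustration of the basic mechanics (a steady refill capped at capacity, plus a non-blocking consume), here is a hedged sketch; the constructor arguments and helper names are assumptions, not the signature of backup.util.TokenBucket, and the real class also handles async waits and requests larger than the bucket's capacity (as test_capacity shows), which this sketch deliberately omits.

# Illustrative token bucket only; parameter names and time handling are assumptions.
class SketchTokenBucket:
    def __init__(self, now, capacity, fill_rate, tokens=None):
        self._capacity = capacity
        self._fill_rate = fill_rate                   # tokens added per second
        self._tokens = capacity if tokens is None else tokens
        self._last = now                              # timestamp in seconds

    def _refill(self, now):
        # Add fill_rate tokens per elapsed second, never exceeding capacity.
        self._tokens = min(self._capacity, self._tokens + (now - self._last) * self._fill_rate)
        self._last = now

    def consume(self, count, now):
        # Non-blocking: take the tokens only if they are all available right now.
        self._refill(now)
        if self._tokens >= count:
            self._tokens -= count
            return True
        return False

    def needed_wait(self, minimum, now):
        # How long until at least `minimum` tokens would be available at the fill rate.
        self._refill(now)
        if self._tokens >= minimum:
            return 0.0
        return (minimum - self._tokens) / self._fill_rate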