233 Commits

Author SHA1 Message Date
RustDesk
66c78c23fe Update build.yaml 2025-01-21 01:33:42 +08:00
rustdesk
e251161c5d rust 1.81 2025-01-21 01:21:13 +08:00
rustdesk
b9e5968299 1.1.13 2025-01-21 01:09:21 +08:00
21pages
7a509f6975 replace libs/hbb_common with submodule (#502)
cargo update -p schannel to fix crash on higher rust toolchain, https://github.com/seanmonstar/reqwest/issues/2311

Signed-off-by: 21pages <sunboeasy@gmail.com>
2025-01-20 17:34:22 +08:00
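With libs/hbb_common now a git submodule, a fresh checkout has to initialize it before building; a minimal sketch of that workflow, including the `cargo update -p schannel` step named in the commit:

```bash
# Clone with the hbb_common submodule (or initialize it in an existing checkout)
git clone --recurse-submodules https://github.com/rustdesk/rustdesk-server
cd rustdesk-server
git submodule update --init --recursive

# Pull in the patched schannel release mentioned above, then build
cargo update -p schannel
cargo build --release
```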
Integral
772db7422f refactor: replace static with const for global constants (#494) 2024-12-07 17:54:53 +08:00
XLion
2f246537df README: Restructure Container expression; add ghcr; multiple tidy up (#479)
* Update README.md

* make hbbs first everywhere

* Update README.md

* Fix link

* dockerhub to Docker Hub; Suggest user use ghcr if can't access Docker Hub

* Add `

* Add Debian 12
2024-12-02 21:14:56 +08:00
XLion
4c74586ce0 Borrow Cargo.toml's profile.release from RustDesk for better binary (#481) 2024-10-14 11:00:38 +08:00
XLion
2ac3169d77 Don't test with editing README (#480) 2024-10-12 18:14:37 +08:00
rustdesk
6f18a97644 v1.1.12 2024-10-07 16:21:36 +08:00
XLion
3b386b6b54 feat: Publish container images to GitHub ghcr.io (#473)
* ghcr

* fix name

* ghcr

* ghcr

* ghcr

* ghcr

* ghcr

* ghcr update action

* ghcr update action

* ghcr update action

* ghcr update action

* ghcr update action

* ghcr update action

* ghcr classic

* ghcr classic

* ghcr: better naming and tidy up

* ghcr classic fix chmod

* tidy up

* tidy up

* if-no-files-found: error
2024-09-30 08:46:35 +08:00
Fionera
b37033d92c docs: the servers are separated by comma instead of colon (#462) 2024-09-28 23:34:47 +08:00
XLion
b7bab80bfe Bump S6 overlay and fix env warnings (#472) 2024-09-28 23:33:47 +08:00
Dominik Hassler
041a603173 add illumos support (#433) 2024-06-29 22:25:47 +08:00
rustdesk
e40994d62e remove useless KEY_FOR_API 2024-05-26 21:43:13 +08:00
rustdesk
5078a1f797 reuse port, and revert hbbr -k 2024-05-24 18:37:11 +08:00
rustdesk
a22dacce0c fix ci 2024-05-24 18:09:12 +08:00
rustdesk
064c9e4bb4 fix ci 2024-05-24 18:02:39 +08:00
rustdesk
3cf0f6560f bump to 1.1.11 2024-05-24 17:59:53 +08:00
rustdesk
c4c26dd6d7 change -k to default _ 2024-05-24 17:57:37 +08:00
r00t
4240c47244 Add Simplified Chinese Readme File. (#409) 2024-04-24 19:04:02 +08:00
writegr
6e91f41a10 chore: fix some typos in comments (#404)
Signed-off-by: writegr <wellweek@outlook.com>
2024-04-18 14:39:10 +08:00
3x3cut0r
1a7cee157c Update README.md (#389)
RELAY_SERVERS was renamed to RELAY, since the RELAY_SERVERS variable does not exist
2024-03-15 21:30:12 +08:00
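For reference, the corrected variable is consumed by the S6-overlay image like this (hostname is a placeholder):

```bash
# RELAY (not RELAY_SERVERS) tells the container which relay host to advertise
docker run --name rustdesk-server --net=host \
  -e "RELAY=rustdesk.example.com" \
  -e "ENCRYPTED_ONLY=1" \
  -v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```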
dforel
72641270f1 Update README.md (#388)
* Update README.md

Add a short note about a step that may go wrong

* Update README.md

* Update README.md

---------

Co-authored-by: RustDesk <71636191+rustdesk@users.noreply.github.com>
2024-03-15 10:22:21 +08:00
tschettervictor
19f8d3a0f4 Add description of relay server variable for clarity (#378)
Since the "rustdesk_hbbs_ip" is not where the server should listen on, but actually the relay server to use, I added a description that will clarify this
2024-02-27 21:44:56 +08:00
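The variable ends up as the `-r` argument of `hbbs`, i.e. the relay server handed out to clients, not a listen address; for example:

```bash
# <relay-server-ip[:port]> is the hbbr host; the port is only needed if it differs from 21117
hbbs -r rustdesk.example.com:21117
```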
XLion
bac9548f86 Add README for Traditional Chinese (#364)
* Add README-TW.md

* Add "繁體中文" hyperlink to README.md

* Add English hyperlink to README-TW.md

* Add "繁體中文" hyperlink to README-NL.md

* Add "繁體中文" hyperlink to README-DE.md
2024-02-16 10:48:19 +08:00
rustdesk
79f0eb497b trim private key 2024-01-31 11:30:42 +08:00
Paolo Asperti
94ae51458c fix Pk size check (#361)
* more descriptive error

* fix key size check
2024-01-31 11:21:00 +08:00
rustdesk
778c89efb1 bump to 1.1.10-3 2024-01-31 11:20:04 +08:00
rustdesk
a7a0fa7cb5 bump to 1.1.10-2 2024-01-30 19:23:36 +08:00
RustDesk
2d8f6ae4f4 Update changelog 2024-01-30 19:16:44 +08:00
RustDesk
324dfd6a1f fix https://github.com/rustdesk/rustdesk-server/issues/306 2024-01-30 19:02:30 +08:00
RustDesk
70242e6eb2 Update common.rs 2024-01-30 18:29:04 +08:00
RustDesk
42cdfb0885 Merge pull request #348 from paspo/pk-check
private key size check
2024-01-30 18:24:04 +08:00
paspo
cea8403dbc private key size check 2024-01-30 11:18:43 +01:00
RustDesk
0ebfc09f8b Merge pull request #333 from ledeuns/patch-1
Update mac_address
2023-12-25 08:49:52 +08:00
Denis Fondras
2e06125974 Update mac_address
The latest version of mac_address makes it possible to compile rustdesk-server on OpenBSD
2023-12-24 18:21:51 +01:00
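To pick up the newer crate in an existing checkout, a plain dependency bump is enough (the Cargo.toml in this repository later pins `mac_address = "1.1.5"`); a minimal sketch:

```bash
# Refresh only the mac_address entry in Cargo.lock, then rebuild
cargo update -p mac_address
cargo build --release
```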
RustDesk
fc775102ff Delete .github/workflows/test-selfhosted.yaml 2023-12-22 20:03:30 +08:00
RustDesk
891f388040 Merge pull request #328 from paspo/slim-docker-classic
minimal docker classic images
2023-12-07 17:09:11 +08:00
paspo
8b7f3491b1 minimal docker classic images 2023-12-07 08:15:00 +01:00
rustdesk
27ac9dec56 remove docker again 2023-12-06 02:05:33 +08:00
rustdesk
acf2c6d787 try archlinux/archlinux:base-devel 2023-12-06 01:19:57 +08:00
rustdesk
5f137710be do not use docker for the runner 2023-12-06 00:33:52 +08:00
rustdesk
1a6016f08f specify image 2023-12-06 00:28:59 +08:00
rustdesk
f67e8991ef test self-hosted runner 2023-12-05 23:43:41 +08:00
rustdesk
d81010224d remove sign 2023-12-05 17:22:40 +08:00
rustdesk
4c27143125 bump 1.1.9 2023-12-05 17:09:07 +08:00
rustdesk
9461bbe8f3 remove unused 2023-12-01 11:33:16 +08:00
rustdesk
1142cf105b Fix #324 to remove unsafe 2023-12-01 11:32:07 +08:00
RustDesk
5133af1863 Merge pull request #320 from tschettervictor/patch-1
Update rustdesk-hbbs
2023-11-17 11:44:29 +08:00
tschettervictor
7c7d554609 Update rustdesk-hbbs
typo
2023-11-16 18:27:01 -07:00
RustDesk
272a094fde Delete ask-a-question.md 2023-08-08 10:18:04 +08:00
RustDesk
b33d7954be Merge pull request #292 from dinger1986/master
Update README.md
2023-08-04 16:43:06 +08:00
dinger1986
f519be8e92 Update README.md 2023-08-04 09:23:02 +01:00
RustDesk
33331be361 Merge pull request #289 from madpilot78/FreeBSD_rc_scripts_fixes
FreeBSD rc scripts fixes
2023-07-25 13:24:21 +08:00
Guido Falsi
04a9d307c5 Change chdir location to /var/db, according to hier(7).
BSD systems do not really have a /var/lib directory, although it happens to exist on live systems because some installed software creates it.
2023-07-24 17:06:23 +02:00
Guido Falsi
35c2386c98 Remove unneeded quotes. 2023-07-24 17:05:57 +02:00
Guido Falsi
4f41300450 Fix comments. 2023-07-24 17:01:34 +02:00
rustdesk
d6f99ed9a2 fix postrm 2023-07-22 22:06:51 +08:00
rustdesk
95fe0e5a78 fix https://github.com/rustdesk/rustdesk-server/issues/286 2023-07-22 21:57:03 +08:00
rustdesk
b713303c15 fix naming 2023-07-06 00:50:11 +08:00
rustdesk
d8f88e72c1 ubuntu22.04 has no i386 2023-07-06 00:16:01 +08:00
rustdesk
d3a459542d make classic also multiarch 2023-07-05 23:45:50 +08:00
rustdesk
afeebe852d github ci does not support 18.04 anymore 2023-06-29 10:41:39 +08:00
rustdesk
f1e941bf9f fix is_loopback 2023-06-16 15:13:23 +08:00
RustDesk
c871978475 Update setup.nsi 2023-06-13 10:04:47 +08:00
rustdesk
411502cd0b https://github.com/rustdesk/rustdesk-server/issues/260 2023-06-08 20:02:30 +08:00
rustdesk
243fb1fb06 more version fix 2023-06-08 19:19:02 +08:00
rustdesk
9657dcf596 release tag 2023-06-08 19:19:02 +08:00
RustDesk
946845cd01 rust 1.70 2023-06-08 16:18:00 +08:00
rustdesk
fd1c21b114 fmt 2023-06-08 14:11:37 +08:00
RustDesk
d8e3cb9e65 Merge pull request #249 from nsgundy/FixNoDirectConnectionWhenBothPeersOnLan
Fix no direct connection when both peers on LAN
2023-06-08 13:53:09 +08:00
rustdesk
3a7904fa8e fix test_hbb and bump version 1.1.8 2023-06-08 13:42:34 +08:00
nsgundy
85a20769fb Consider peers to be on same intranet if is_lan() returns true for both 2023-05-19 15:21:59 +00:00
nsgundy
aeeca0d7d1 Fix ip4 mapped ip6 addresses not considered to be part of network 2023-05-19 15:21:20 +00:00
RustDesk
482d7fb8cc Merge pull request #247 from Mr-Update/master
Create README-DE.md
2023-05-09 19:23:18 +08:00
Mr-Update
c291900e37 Update README.md 2023-04-27 23:40:31 +02:00
Mr-Update
089352420f Update README-NL.md 2023-04-27 23:40:04 +02:00
Mr-Update
1addf8c9eb Create README-DE.md 2023-04-27 23:39:30 +02:00
RustDesk
1f7d3fa05c Merge pull request #236 from bahdotsh/bahdotsh/language-improvement
improved language and corrected spelling mistakes in README.md
2023-04-03 15:37:22 +08:00
bahdotsh
c3b6e1351f improved language and corrected spelling mistakes in README.md 2023-04-03 13:04:41 +05:30
RustDesk
336b281657 Merge pull request #235 from n-connect/master
Core dump fix and rc.d scripts optimisation for FreeBSD
2023-04-01 17:31:58 +08:00
n-connect
57898642e8 Variable optimisation for hbbr rc.d service
Variable based IP definition, can be set from /etc/rc.conf
2023-04-01 11:19:43 +02:00
n-connect
fa0c006ac5 Variable optimisation for hbbs rc.d service
Variable based IP definition, can be set from /etc/rc.conf
2023-04-01 11:18:44 +02:00
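A sketch of how these rc.d knobs are typically set on FreeBSD, assuming the variable names used by the scripts above (`rustdesk_hbbs_ip` being the relay address passed to `-r`):

```bash
# Persist the relay address in /etc/rc.conf and restart the service
sysrc rustdesk_hbbs_ip="rustdesk.example.com"
service rustdesk-hbbs restart
```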
n-connect
178ff59623 Fix for core dumps on FreeBSD
Flexi_logger options for async writemode https://github.com/rustdesk/rustdesk-server/pull/232#issuecomment-1491347232
2023-04-01 11:14:51 +02:00
RustDesk
7bbed69ad2 Merge pull request #232 from n-connect/master
U 20.04 for binary build & release
2023-03-30 00:43:49 +08:00
n-connect
d4b6e6ee28 U 20.04 for binary build & release
Build and GitHub release to 20.04
2023-03-29 18:40:33 +02:00
RustDesk
b3fbb9e179 Merge pull request #231 from n-connect/master
Move from Ubuntu 18.04 VM (Github Actions)
2023-03-29 22:45:38 +08:00
n-connect
b44eb1bbc3 Linux toolchain to 1.67.1
Changes: Ubuntu boxes rolled back to 18.04 where applicable.

Linux toolchain from 1.62 -> 1.67.1, Windows toolchain untouched (1.62)
2023-03-29 16:43:02 +02:00
n-connect
d88286642e Move from Ubuntu 18.04
Based on https://github.com/nextcloud/notify_push/releases, 18.04 will be deprecated. Also moving the toolchain to 1.68 to hopefully fix the FreeBSD build core dump issue.
2023-03-29 14:28:51 +02:00
RustDesk
74ff886900 Merge pull request #225 from n-connect/master
Adding log capability over syslog
2023-03-28 18:38:24 +08:00
n-connect
9e716b3b7b Minor update 2023-03-27 15:59:06 +02:00
n-connect
bea99ae315 Adding log capability over syslog
Logging over syslog added via a semi-duplicate line commented out by default. Instructions in the line above.
2023-03-25 22:30:59 +01:00
n-connect
6068b5941c Adding log capability over syslog
Logging over syslog added via a semi-duplicate line commented out by default. Instructions in the line above.
2023-03-25 22:29:02 +01:00
rustdesk
675bf3c1f5 fix command line buffer and test addr 2023-03-16 00:53:58 +08:00
RustDesk
dc81956d42 Merge pull request #219 from FastAct/master
Create README-NL.md
2023-03-15 19:33:42 +08:00
FastAct
ffe736be17 Create README-NL.md
Add Dutch translation
2023-03-15 12:21:01 +01:00
RustDesk
b83dae4cc4 Merge pull request #212 from n-connect/patch-1
Update build.yaml - adding FreeBSD build
2023-03-06 08:36:49 +08:00
n-connect
8d9203ecdb Update build.yaml - adding FreeBSD build 2023-03-05 19:44:56 +01:00
RustDesk
13321b5a90 Merge pull request #210 from n-connect/master
Log files enables
2023-03-03 10:32:00 +08:00
n-connect
0a8c39c11f hbbs logging to file
Logging enabled via file redirection (not syslog, as it can't tell/pass the logger program's name)
2023-03-03 01:11:26 +01:00
n-connect
7dd812c79a hbbr logging to file
Logging enabled via file redirection (not syslog, as it can't tell/pass the logger program's name)
2023-03-03 01:10:09 +01:00
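A minimal sketch of the file-redirection idea, assuming the service is wrapped with daemon(8); the exact paths and flags in the shipped scripts may differ:

```bash
# Send stdout/stderr of hbbs to a log file instead of syslog
/usr/sbin/daemon -p /var/run/rustdesk-hbbs.pid \
  -o /var/log/rustdesk-hbbs.log \
  /usr/local/bin/hbbs -r rustdesk.example.com
```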
RustDesk
9d524443ec Merge pull request #208 from n-connect/master
FreeBSD rcd scripts for hbbs & hbbr
2023-03-02 21:39:32 +08:00
n-connect
35a192a478 Create rustdesk-hbbs
FreeBSD rcd script running hbbs as a service. Service user, group, pid, and running directory are handled. The IP address for the -r option needs to be changed manually.
2023-03-02 13:47:07 +01:00
n-connect
2f4235a968 Create rustdesk-hbbr
FreeBSD rcd script running hbbr as service. Service user, group, pid, running directory handled.
2023-03-02 13:44:13 +01:00
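Once the two scripts are installed under /usr/local/etc/rc.d, enabling and starting them follows the usual FreeBSD pattern; a sketch assuming the conventional `${name}_enable` rcvars:

```bash
# Enable both services at boot, then start them (hbbr first, then hbbs)
sysrc rustdesk_hbbr_enable=YES rustdesk_hbbs_enable=YES
service rustdesk-hbbr start
service rustdesk-hbbs start
```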
RustDesk
12b57238d2 Merge pull request #207 from n-connect/master
Update Cargo.toml for FreeBSD build
2023-03-02 20:31:30 +08:00
n-connect
fe805e8554 Update Cargo.toml for FreeBSD build
The crates.io package local-ip-address is FreeBSD-compatible from v0.5 onward, i.e. it compiles and works. Simply changing the version to v0.5.1 in Cargo.toml was enough for a successful release build on FreeBSD 13 with the prepackaged Rust 1.67.1.
2023-03-02 13:19:10 +01:00
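A sketch of the described build on FreeBSD 13, pinning the crate to the exact version mentioned:

```bash
# Pin local-ip-address to 0.5.1 in Cargo.lock and do a release build
cargo update -p local-ip-address --precise 0.5.1
cargo build --release
```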
rustdesk
4d6d439b1a 1.1.7-1 2023-02-18 13:44:25 +08:00
rustdesk
ec202209f3 fix ID_EXISTS not sent out due to ipv6 change 2023-02-18 13:41:45 +08:00
RustDesk
49f10a288d Merge pull request #197 from elilchen/master
vite build
2023-02-16 23:05:27 +08:00
elilchen
986d16eb2d vite build 2023-02-16 22:28:34 +08:00
RustDesk
4fd83deaf1 Merge pull request #196 from elilchen/master
vite build
2023-02-16 22:23:16 +08:00
elilchen
85150127bb vite build 2023-02-16 22:18:27 +08:00
RustDesk
26d8c13fe4 Merge pull request #195 from elilchen/master
crt-static
2023-02-16 15:29:10 +08:00
elilchen
388ae586ec crt-static 2023-02-16 15:23:31 +08:00
RustDesk
6ad923d519 Merge pull request #194 from elilchen/master
fix issues #192
2023-02-16 14:04:48 +08:00
elilchen
fe661fe067 merge 2023-02-16 13:50:08 +08:00
elilchen
ad40d65070 issues #192 add MicrosoftEdgeWebview2Setup and fix the "VCRUNTIME140.dll Is Missing" error on windows server 2022 2023-02-16 13:39:08 +08:00
RustDesk
10bb0530ae Merge pull request #190 from elilchen/master
change icons
2023-02-14 22:52:42 +08:00
elilchen
7c3be2d9fb change icons 2023-02-14 22:48:59 +08:00
rustdesk
14301a7d5f sign all exe 2023-02-14 19:56:27 +08:00
rustdesk
d0841f7558 more lang in setup.nsi 2023-02-14 19:19:38 +08:00
rustdesk
467298efa7 fix sign 2023-02-14 19:02:46 +08:00
rustdesk
75203d2e4e sign 2023-02-14 18:20:05 +08:00
RustDesk
27d8f9cbb4 Merge pull request #188 from elilchen/master
UI
2023-02-12 09:23:42 +08:00
elilchen
7a0e300ff9 UI 2023-02-12 00:48:38 +08:00
rustdesk
b2f381913d sync 2023-02-11 00:25:44 +08:00
rustdesk
6ec46cb95f CI 2023-02-08 17:07:27 +08:00
rustdesk
e2f4962ba8 clippy 2023-02-08 16:45:30 +08:00
rustdesk
7e307a5a1c CI 2023-02-08 16:00:12 +08:00
rustdesk
33f54ba5aa sync with rustdesk 2023-02-08 15:45:51 +08:00
RustDesk
6a83ffea62 Merge pull request #187 from attie-argentum/encrypted_only
add '-k _' to hbbr if ENCRYPTED_ONLY is set
2023-02-05 12:26:12 +08:00
Attie Grande
af848f96df add '-k _' to hbbr if ENCRYPTED_ONLY is set 2023-02-03 22:40:56 +00:00
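A sketch of the launch logic (not the literal s6 run script): when `ENCRYPTED_ONLY=1`, `hbbr` gets `-k _` so unencrypted connections are rejected:

```bash
# Start hbbr in encrypted-only mode when the env var is set
if [ "$ENCRYPTED_ONLY" = "1" ]; then
    exec hbbr -k _
else
    exec hbbr
fi
```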
rustdesk
d88e4b5151 make hbbr / hbbs share the PORT value of .env 2023-02-01 23:31:00 +08:00
rustdesk
fe3b42809a run gen_version no matter debug or release 2023-02-01 10:49:16 +08:00
rustdesk
2830be95a7 opt 2023-01-27 11:37:43 +08:00
rustdesk
a974906fdc Merge branch 'master' into tmp 2023-01-27 11:37:15 +08:00
rustdesk
17ddc89bd0 sync rustdesk's hbb_common here 2023-01-27 11:00:59 +08:00
RustDesk
088a009078 Merge pull request #180 from paspo/deb_logdir_creation
fix logdir creation
2023-01-19 09:24:47 +08:00
Paolo Asperti
be2ce5c93b fix logdir creation 2023-01-18 18:19:12 +01:00
rustdesk
f8936eff93 change date 2023-01-11 11:28:36 +08:00
rustdesk
accd96f1d8 add 1.1.7 to debian/changelog 2023-01-11 11:20:09 +08:00
rustdesk
32ee474813 fmt 2023-01-10 23:03:44 +08:00
rustdesk
46a7c025a0 update version 2023-01-10 22:56:28 +08:00
rustdesk
3ca4035d0c wrong image name 2023-01-10 22:10:15 +08:00
rustdesk
86a75451d8 centos7 -> ubuntu18.04 2023-01-10 17:15:56 +08:00
rustdesk
8cdfe0fec6 22.04 -> 7 2023-01-10 16:44:36 +08:00
rustdesk
5aaad36729 1.1.7 2023-01-10 16:26:06 +08:00
rustdesk
fc83fa0a04 try_into_v4 2023-01-10 16:09:25 +08:00
RustDesk
338af1af9d Merge pull request #176 from botanicvelious/master
Add logging to the service files
2023-01-10 11:06:00 +08:00
botanicvelious
6538023a11 Update rustdesk-hbbr.service 2023-01-09 20:00:55 -07:00
botanicvelious
f139ad69a1 add logging to the .service file 2023-01-09 20:00:30 -07:00
rustdesk
55fcf241c6 try to_v4 in mangle encode 2023-01-09 14:52:29 +08:00
RustDesk
ebd73e1a09 remove RMEM and fix RUST_LOG 2023-01-08 11:31:13 +08:00
RustDesk
00e6a016f8 remove some env vars which normal users do not care 2023-01-08 11:29:43 +08:00
RustDesk
78d7e9437e Update test.yml 2023-01-07 12:50:24 +08:00
rustdesk
6bd5621fb0 fix ci 2023-01-07 12:38:21 +08:00
rustdesk
ee794d2e40 no gen_version if debug 2023-01-07 12:31:12 +08:00
rustdesk
605d0dd6c1 fix clippy 2023-01-07 11:59:53 +08:00
RustDesk
ebbe5d5297 Update test.yml 2023-01-07 11:51:14 +08:00
rustdesk
cd1a9885db test.yml 2023-01-07 00:58:51 +08:00
rustdesk
81d4fb6d6a fmt 2023-01-07 00:50:48 +08:00
rustdesk
a766aaf165 update .gitignore 2023-01-07 00:37:41 +08:00
rustdesk
55b841afb5 one more clippy 2023-01-07 00:37:12 +08:00
rustdesk
d48913d7b5 fix clippy 2023-01-07 00:32:10 +08:00
RustDesk
1557203912 Merge pull request #82 from dlhxzb/fix-clippy-warning
Fix: clippy warning in rust 1.62.1
2023-01-07 00:28:29 +08:00
RustDesk
0e01cfcd3a Merge branch 'master' into fix-clippy-warning 2023-01-07 00:28:18 +08:00
rustdesk
e70d82b30f ipv6 support draft 2023-01-06 20:31:15 +08:00
Bo Zhang
60a6d672c5 Fix: clippy warning in rust 1.66.0 2023-01-06 18:48:18 +09:00
RustDesk
d7b2060a5b Merge pull request #86 from dlhxzb/listern-for-unix-signal
Feat: listen for unix signal
2023-01-06 11:11:19 +08:00
RustDesk
75a40412b4 Merge branch 'master' into listern-for-unix-signal 2023-01-06 11:09:52 +08:00
rustdesk
93a89b8ea3 modify LOCAL_IP desc 2023-01-06 11:04:57 +08:00
RustDesk
8f5ce48939 Merge pull request #112 from paspo/envvars
Env vars
2023-01-06 11:02:21 +08:00
Huabing Zhou
2314783d42 sync rustdesk's hbb_common here 2023-01-06 10:40:26 +08:00
RustDesk
753c774380 Merge pull request #156 from paspo/windows-build
test windows build
2022-11-26 08:57:37 +08:00
Paolo Asperti
f626f82a94 test windows build
win build action

win build action

win build action

win build action

win build action

win build action

win build action

win build action

win build action

win build action

win build action
2022-11-26 00:11:55 +01:00
Paolo Asperti
e732599941 Merge remote-tracking branch 'origin/envvars' into envvars 2022-11-24 22:53:22 +01:00
Paolo Asperti
29b45dddb4 env variables doc 2022-11-24 22:52:56 +01:00
Paolo Asperti
650f2410ed hbbr can use ENV from docker 2022-11-24 22:52:56 +01:00
RustDesk
011b316183 Merge pull request #149 from JivinDotL/master
fix build issue 'error: non-binding let on a synchronization lock'
2022-11-20 22:59:54 +08:00
Jivin
24620c0a07 fix build issue 'error: non-binding let on a synchronization lock' 2022-11-20 09:55:17 -05:00
RustDesk
fa2b42db76 Merge pull request #122 from fufesou/peer_online_state
query_onlines: trivial refactor
2022-10-04 14:54:08 +08:00
fufesou
099aaa6b55 query_onlines: trivial refactor
Signed-off-by: fufesou <shuanglongchen@yeah.net>
2022-10-04 13:09:56 +08:00
RustDesk
85af668a4f Merge pull request #88 from paspo/doctor
rustdesk-server doctor
2022-09-20 06:02:30 +08:00
Paolo Asperti
c16101a44c env variables doc 2022-09-05 20:30:50 +02:00
Paolo Asperti
4baab96183 hbbr can use ENV from docker 2022-09-05 11:54:39 +02:00
RustDesk
74cb82c8a2 Merge pull request #111 from paspo/zip_perm
Artifacts in zip should be executables
2022-09-05 15:04:23 +07:00
Paolo Asperti
1b440b61e7 Artifacts in zip should be executables 2022-09-05 09:36:03 +02:00
RustDesk
6aa0019f8d Merge pull request #108 from miguelagve/patch-2
Update README.md for SELinux comment
2022-09-03 10:02:46 +08:00
Miguel Agueda
506b0b5364 Update README.md for SELinux comment
Added comment noting the changes required to make the containers work on a system using SELinux
2022-09-03 03:03:32 +02:00
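On a SELinux host the bind mount needs a relabel suffix (or container separation disabled), as the README later documents; for example:

```bash
# Append :z so the volume is relabeled for container access on SELinux systems
docker run --name hbbs --net=host -v "$PWD/data:/root:z" \
  -d rustdesk/rustdesk-server:latest hbbs -r rustdesk.example.com
# Alternatively, disable SELinux separation for the container entirely
docker run --name hbbr --net=host -v "$PWD/data:/root" \
  --security-opt label=disable -d rustdesk/rustdesk-server:latest hbbr
```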
Paolo Asperti
bf3e9471a6 proposed modifications 2022-08-09 10:07:01 +02:00
RustDesk
4bdc205fca Merge pull request #91 from paspo/doc
readme update
2022-08-08 18:17:47 +08:00
Paolo Asperti
cbadfcdfb1 readme update 2022-08-08 12:10:30 +02:00
Paolo Asperti
d878222fc1 rustdesk-server doctor 2022-08-07 00:25:18 +02:00
dlhxzb
ca2bc99a38 Feat: listen for unix signal 2022-08-04 18:02:10 +09:00
rustdesk
848b5aedb7 remove ':' from hash 2022-07-31 01:50:53 +08:00
RustDesk
9036b7b9fa Merge pull request #78 from paspo/pack-release
Zipped release binaries
2022-07-27 16:03:54 +08:00
Paolo Asperti
1c5d4c3cb2 zipped binaries 2022-07-27 09:35:19 +02:00
RustDesk
b1ad5c2e0c Merge pull request #77 from paspo/templates
Issue Templates
2022-07-27 09:08:57 +08:00
Paolo Asperti
6991b6eca0 a note about the private key 2022-07-26 23:02:07 +02:00
Paolo Asperti
f6c5088aad new issue templates 2022-07-26 22:53:09 +02:00
RustDesk
baecd45f27 Merge pull request #76 from fufesou/peer_online_state
peer_online_state: serve online state
2022-07-27 00:43:16 +08:00
fufesou
f7fc45a3d2 peer_online_state: response online state bits
Signed-off-by: fufesou <shuanglongchen@yeah.net>
2022-07-27 00:35:40 +08:00
fufesou
a4940f4634 peer_online_state: serve online state
Signed-off-by: fufesou <shuanglongchen@yeah.net>
2022-07-26 23:03:30 +08:00
Paolo Asperti
545ae2fd93 Update issue templates 2022-07-25 23:00:38 +02:00
RustDesk
8c477c8cd0 Merge pull request #72 from paspo/debian
debian support
2022-07-25 23:05:17 +08:00
Paolo Asperti
ccc870de77 debian support 2022-07-25 14:36:44 +02:00
Paolo Asperti
70e6cf13ec updated README 2022-07-22 10:29:10 +02:00
RustDesk
dbab22cbbc Merge pull request #70 from paspo/docker_verify_keypair
keypair verification before container startup
2022-07-22 16:23:29 +08:00
Paolo Asperti
fab70ce8e7 keypair verification before container startup 2022-07-22 10:18:50 +02:00
RustDesk
d11607fb6c Merge pull request #69 from paspo/fix-docker-manifest
fix github action on manual build
2022-07-22 10:39:37 +08:00
rustdesk
51d8cd80c1 protbuf 3.1 with_bytes 2022-07-22 00:28:10 +08:00
Paolo Asperti
269f2fe0eb fix github action on manual build 2022-07-21 17:27:02 +02:00
RustDesk
2d385d88d3 Merge pull request #68 from KreativeKrise/patch-1
Use same volumes for hbbs and hbbr
2022-07-21 22:47:37 +08:00
KreativeKrise
595aeb6d50 Use same volumes for hbbs and hbbr
The services hbbs and hbbr must use the same volume. Otherwise, different keys are used when encryption is enabled.
2022-07-21 16:45:06 +02:00
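In practice both containers mount the same host directory, so they read the same `id_ed25519` keypair; a minimal sketch:

```bash
# hbbs and hbbr share ./data, so the generated keypair is common to both
docker run --name hbbs --net=host -v "$PWD/data:/root" \
  -d rustdesk/rustdesk-server:latest hbbs -r rustdesk.example.com
docker run --name hbbr --net=host -v "$PWD/data:/root" \
  -d rustdesk/rustdesk-server:latest hbbr
```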
RustDesk
06bd1117f6 Merge pull request #67 from paspo/deb_package2
update deb packaging
2022-07-21 19:12:00 +08:00
Paolo Asperti
ee04df9779 added .deb section in readme 2022-07-21 12:42:34 +02:00
Paolo Asperti
a98d322685 deb package build for rustdesk-utils 2022-07-21 12:05:52 +02:00
RustDesk
670fb87ee1 Merge pull request #64 from paspo/deb_package
.deb packaging
2022-07-21 17:12:23 +08:00
RustDesk
50d7975ad8 Merge pull request #61 from paspo/secrets
managing encryption keys via docker secrets
2022-07-21 17:11:42 +08:00
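A hedged sketch of supplying an existing keypair through Docker Swarm secrets; the secret names `key_priv` and `key_pub` are illustrative and not necessarily the ones the image expects:

```bash
# Create secrets from existing key files (requires swarm mode)
docker secret create key_priv ./id_ed25519
docker secret create key_pub ./id_ed25519.pub
```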
RustDesk
003d9f1324 Merge pull request #62 from paspo/rustdesk-utils
small util for key management
2022-07-21 17:11:15 +08:00
Paolo Asperti
26549d7e7e better error management 2022-07-21 11:07:16 +02:00
Paolo Asperti
913de8515e just a pass of 'cargo fmt' 2022-07-15 21:39:19 +02:00
Paolo Asperti
3340322355 deb packaging 2022-07-15 18:40:54 +02:00
Paolo Asperti
06409279f4 rustdesk-utils 2022-07-14 15:59:39 +02:00
Paolo Asperti
0862bc8c04 test secrets 2022-07-14 15:14:53 +02:00
rustdesk
acaee5b7a4 https://github.com/rustdesk/rustdesk-server/issues/24 2022-07-14 18:39:42 +08:00
Paolo Asperti
bfcfa68eae readme lint 2022-07-13 18:00:27 +02:00
rustdesk
39153ce147 fix slow connection, '/' in pub key, and hbbr wait for key, and possible
solution for https://github.com/rustdesk/rustdesk-server/issues/24
2022-07-13 00:22:45 +08:00
RustDesk
57cbac7079 Merge pull request #53 from paspo/fix_service_start
Fix service start
2022-07-01 22:13:41 +08:00
Paolo Asperti
8fafadd1cb added 2 sec wait 2022-07-01 16:07:01 +02:00
Paolo Asperti
98778beed3 added service dep 2022-07-01 16:06:47 +02:00
RustDesk
2d5429640c Update README.md 2022-07-01 21:40:21 +08:00
RustDesk
eaf57b4a40 Update README.md 2022-07-01 21:37:35 +08:00
115 changed files with 10833 additions and 4400 deletions

8
.cargo/config.toml Normal file

@@ -0,0 +1,8 @@
[target.x86_64-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"]
[target.i686-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"]
[target.'cfg(target_os="macos")']
rustflags = [
"-C", "link-args=-sectcreate __CGPreLoginApp __cgpreloginapp /dev/null",
]

35
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,35 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: 'bug'
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**Describe the environment**
- Install environment: docker, docker swarm, podman, kubernetes, or package
- If available, the `docker-compose.yaml` file
- If package, we need the distribution and release: Ubuntu 22.04, Debian 11, ...
- Or if you're running the plain binary, how you're running it
- In any case, you have to specify the version in use
**How to Reproduce the bug**
Steps to reproduce the behavior:
1. Given the previously described environment
2. Do this and that
3. I get this error
**Expected behavior**
This should happen instead.
**Additional context**
Add any other context about the problem here.
**Notes**
- Please write in English only. If you provide images in other languages, you're required to include an English translation.
- In any case, **NEVER** put here the content of your `id_ed25519` file

1
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1 @@
blank_issues_enabled: false

.github/ISSUE_TEMPLATE/feature_request.md vendored Normal file

@@ -0,0 +1,25 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: 'enhancement'
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context about the feature request here.
**Notes**
- Please write in English only. If you provide images in other languages, you're required to include an English translation.
- In any case, **NEVER** put here the content of your `id_ed25519` file

.github/workflows/build.yaml vendored

@@ -26,7 +26,7 @@ jobs:
build:
name: Build - ${{ matrix.job.name }}
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
strategy:
fail-fast: false
matrix:
@@ -35,18 +35,23 @@ jobs:
- { name: "arm64v8", target: "aarch64-unknown-linux-musl" }
- { name: "armv7", target: "armv7-unknown-linux-musleabihf" }
- { name: "i386", target: "i686-unknown-linux-musl" }
#- { name: "amd64fb", target: "x86_64-unknown-freebsd" }
steps:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: nightly
toolchain: "1.81"
override: true
default: true
components: rustfmt
profile: minimal
target: ${{ matrix.job.target }}
- name: Build
@@ -56,63 +61,151 @@ jobs:
args: --release --all-features --target=${{ matrix.job.target }}
use-cross: true
# - name: Run tests
# run: cargo test --verbose
- name: Exec chmod
run: chmod -v a+x target/${{ matrix.job.target }}/release/*
- name: Publish Artifacts
uses: actions/upload-artifact@v3
with:
name: binaries-${{ matrix.job.name }}
name: binaries-linux-${{ matrix.job.name }}
path: |
target/${{ matrix.job.target }}/release/hbbr
target/${{ matrix.job.target }}/release/hbbs
target/${{ matrix.job.target }}/release/rustdesk-utils
if-no-files-found: error
build-win:
name: Build - windows
runs-on: windows-2019
steps:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: "1.81"
override: true
default: true
components: rustfmt
profile: minimal
target: x86_64-pc-windows-msvc
- name: Build
uses: actions-rs/cargo@v1
with:
command: build
args: --release --all-features --target=x86_64-pc-windows-msvc
use-cross: true
- name: Install NSIS
run: |
iwr -useb get.scoop.sh -outfile 'install.ps1'
.\install.ps1 -RunAsAdmin
scoop update
scoop bucket add extras
scoop install nsis
- name: Install Node.js
uses: actions/setup-node@v3
with:
node-version: 16
- name: Sign exe files
uses: GermanBluefox/code-sign-action@v7
if: false
with:
certificate: '${{ secrets.WINDOWS_PFX_BASE64 }}'
password: '${{ secrets.WINDOWS_PFX_PASSWORD }}'
certificatesha1: '${{ secrets.WINDOWS_PFX_SHA1_THUMBPRINT }}'
folder: 'target\x86_64-pc-windows-msvc\release'
recursive: false
- name: Build UI browser file
run: |
npm i
npm run build
working-directory: ./ui/html
- name: Build UI setup file
run: |
rustup default nightly
cargo build --release
xcopy /y ..\target\x86_64-pc-windows-msvc\release\*.exe setup\bin\
xcopy /y target\release\*.exe setup\
mkdir setup\logs
makensis /V1 setup.nsi
mkdir SignOutput
mv RustDeskServer.Setup.exe SignOutput\
mv ..\target\x86_64-pc-windows-msvc\release\*.exe SignOutput\
working-directory: ./ui
- name: Sign UI setup file
uses: GermanBluefox/code-sign-action@v7
if: false
with:
certificate: '${{ secrets.WINDOWS_PFX_BASE64 }}'
password: '${{ secrets.WINDOWS_PFX_PASSWORD }}'
certificatesha1: '${{ secrets.WINDOWS_PFX_SHA1_THUMBPRINT }}'
folder: './ui/SignOutput'
recursive: false
- name: Publish Artifacts
uses: actions/upload-artifact@v3
with:
name: binaries-windows-x86_64
path: |
ui\SignOutput\hbbr.exe
ui\SignOutput\hbbs.exe
ui\SignOutput\rustdesk-utils.exe
ui\SignOutput\RustDeskServer.Setup.exe
if-no-files-found: error
# github (draft) release with all binaries
release:
name: Github release
needs: build
runs-on: ubuntu-22.04
needs:
- build
- build-win
runs-on: ubuntu-20.04
strategy:
fail-fast: false
matrix:
job:
- { os: "linux", name: "amd64", suffix: "" }
- { os: "linux", name: "arm64v8", suffix: "" }
- { os: "linux", name: "armv7", suffix: "" }
- { os: "linux", name: "i386", suffix: "" }
#- { os: "linux", name: "amd64fb", suffix: "" }
- { os: "windows", name: "x86_64", suffix: "-unsigned" }
steps:
- name: Download binaries (amd64)
- name: Download binaries (${{ matrix.job.os }} - ${{ matrix.job.name }})
uses: actions/download-artifact@v3
with:
name: binaries-amd64
path: amd64
name: binaries-${{ matrix.job.os }}-${{ matrix.job.name }}
path: ${{ matrix.job.name }}
- name: Download binaries (arm64v8)
uses: actions/download-artifact@v3
with:
name: binaries-arm64v8
path: arm64v8
- name: Exec chmod
run: chmod -v a+x ${{ matrix.job.name }}/*
- name: Download binaries (armv7)
uses: actions/download-artifact@v3
with:
name: binaries-armv7
path: armv7
- name: Pack files (${{ matrix.job.os }} - ${{ matrix.job.name }})
run: |
sudo apt update
DEBIAN_FRONTEND=noninteractive sudo apt install -y zip
zip ${{ matrix.job.name }}/rustdesk-server-${{ matrix.job.os }}-${{ matrix.job.name }}${{ matrix.job.suffix }}.zip ${{ matrix.job.name }}/*
- name: Download binaries (i386)
uses: actions/download-artifact@v3
with:
name: binaries-i386
path: i386
- name: Rename files
run: for arch in amd64 arm64v8 armv7 i386 ; do for b in hbbr hbbs ; do mv -v ${arch}/${b} ${arch}/${b}-${arch} ; done ; done
- name: Create Release
- name: Create Release (${{ matrix.job.os }} - (${{ matrix.job.name }})
uses: softprops/action-gh-release@v1
with:
draft: true
files: |
amd64/*
arm64v8/*
armv7/*
i386/*
files: ${{ matrix.job.name }}/rustdesk-server-${{ matrix.job.os }}-${{ matrix.job.name }}${{ matrix.job.suffix }}.zip
# docker build and push of single-arch images
docker:
@@ -133,11 +226,13 @@ jobs:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Download binaries
uses: actions/download-artifact@v3
with:
name: binaries-${{ matrix.job.name }}
name: binaries-linux-${{ matrix.job.name }}
path: docker/rootfs/usr/bin
- name: Make binaries executable
@@ -171,11 +266,12 @@ jobs:
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
- name: Build and push Docker image
uses: docker/build-push-action@v3
uses: docker/build-push-action@v5
with:
context: "./docker"
platforms: ${{ matrix.job.docker_platform }}
push: true
provenance: false
build-args: |
S6_ARCH=${{ matrix.job.s6_platform }}
tags: |
@@ -209,8 +305,10 @@ jobs:
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
# manifest for :1.2.3 tag
# this has to run only if invoked by a new tag
- name: Create and push manifest (:ve.rs.ion)
uses: Noelware/docker-manifest-action@master
if: github.event_name != 'workflow_dispatch'
with:
base-image: ${{ secrets.DOCKER_IMAGE }}:${{ env.GIT_TAG }}
extra-images: ${{ secrets.DOCKER_IMAGE }}:${{ env.GIT_TAG }}-amd64,${{ secrets.DOCKER_IMAGE }}:${{ env.GIT_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE }}:${{ env.GIT_TAG }}-armv7,${{ secrets.DOCKER_IMAGE }}:${{ env.GIT_TAG }}-i386
@@ -232,34 +330,36 @@ jobs:
extra-images: ${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}-amd64,${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}-armv7,${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}-i386
push: true
# docker build and push of classic images
# docker build and push of single-arch images
docker-classic:
name: Docker push classic - ${{ matrix.job.name }}
name: Docker push - ${{ matrix.job.name }}
needs: build
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
job:
- { name: "amd64", docker_platform: "linux/amd64", tag: "latest" }
- { name: "arm64v8", docker_platform: "linux/arm64", tag: "latest-arm64v8" }
- { name: "amd64", docker_platform: "linux/amd64" }
- { name: "arm64v8", docker_platform: "linux/arm64" }
- { name: "armv7", docker_platform: "linux/arm/v7" }
steps:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Download binaries
uses: actions/download-artifact@v3
with:
name: binaries-${{ matrix.job.name }}
name: binaries-linux-${{ matrix.job.name }}
path: docker-classic/
- name: Make binaries executable
run: chmod -v a+x docker-classic/hbb*
run: chmod -v a+x docker-classic/*
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
@@ -280,12 +380,127 @@ jobs:
with:
images: registry.hub.docker.com/${{ secrets.DOCKER_IMAGE_CLASSIC }}
- name: Get git tag
id: vars
run: |
T=${GITHUB_REF#refs/*/}
M=${T%%.*}
echo "GIT_TAG=$T" >> $GITHUB_ENV
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
- name: Build and push Docker image
uses: docker/build-push-action@v3
uses: docker/build-push-action@v5
with:
context: "./docker-classic"
platforms: ${{ matrix.job.docker_platform }}
push: true
provenance: false
tags: |
${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ matrix.job.tag }}
${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-${{ matrix.job.name }}
${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-${{ matrix.job.name }}
${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-${{ matrix.job.name }}
labels: ${{ steps.meta.outputs.labels }}
# docker build and push of multiarch images
docker-manifest-classic:
name: Docker manifest
needs: docker
runs-on: ubuntu-22.04
steps:
- name: Log in to Docker Hub
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Get git tag
id: vars
run: |
T=${GITHUB_REF#refs/*/}
M=${T%%.*}
echo "GIT_TAG=$T" >> $GITHUB_ENV
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
# manifest for :1.2.3 tag
# this has to run only if invoked by a new tag
- name: Create and push manifest (:ve.rs.ion)
uses: Noelware/docker-manifest-action@master
if: github.event_name != 'workflow_dispatch'
with:
base-image: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}
extra-images: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-amd64,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-armv7
push: true
# manifest for :1 tag (major release)
- name: Create and push manifest (:major)
uses: Noelware/docker-manifest-action@master
with:
base-image: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}
extra-images: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-amd64,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-armv7
push: true
# manifest for :latest tag
- name: Create and push manifest (:latest)
uses: Noelware/docker-manifest-action@master
with:
base-image: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}
extra-images: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-amd64,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-armv7
push: true
deb-package:
name: debian package - ${{ matrix.job.name }}
needs: build
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
job:
- { name: "amd64", debian_platform: "amd64", crossbuild_package: "" }
- { name: "arm64v8", debian_platform: "arm64", crossbuild_package: "crossbuild-essential-arm64" }
- { name: "armv7", debian_platform: "armhf", crossbuild_package: "crossbuild-essential-armhf" }
- { name: "i386", debian_platform: "i386", crossbuild_package: "crossbuild-essential-i386" }
steps:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Create packaging env
run: |
sudo apt update
DEBIAN_FRONTEND=noninteractive sudo apt install -y devscripts build-essential debhelper pkg-config ${{ matrix.job.crossbuild_package }}
mkdir -p debian-build/${{ matrix.job.name }}/bin
- name: Download binaries
uses: actions/download-artifact@v3
with:
name: binaries-linux-${{ matrix.job.name }}
path: debian-build/${{ matrix.job.name }}/bin
- name: Build package for ${{ matrix.job.name }} arch
run: |
chmod -v a+x debian-build/${{ matrix.job.name }}/bin/*
cp -vr debian systemd debian-build/${{ matrix.job.name }}/
cat debian/control.tpl | sed 's/{{ ARCH }}/${{ matrix.job.debian_platform }}/' > debian-build/${{ matrix.job.name }}/debian/control
cd debian-build/${{ matrix.job.name }}/
debuild -i -us -uc -b -a${{ matrix.job.debian_platform }}
- name: Create Release
uses: softprops/action-gh-release@v1
with:
draft: true
files: |
debian-build/rustdesk-server-hbbr_*_${{ matrix.job.debian_platform }}.deb
debian-build/rustdesk-server-hbbs_*_${{ matrix.job.debian_platform }}.deb
debian-build/rustdesk-server-utils_*_${{ matrix.job.debian_platform }}.deb

335
.github/workflows/ghcr.yml vendored Normal file

@@ -0,0 +1,335 @@
name: Build and publish to ghcr.io
on:
workflow_dispatch:
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+'
- '[0-9]+.[0-9]+.[0-9]+'
- 'v[0-9]+.[0-9]+.[0-9]+-[0-9]+'
- '[0-9]+.[0-9]+.[0-9]+-[0-9]+'
env:
CARGO_TERM_COLOR: always
LATEST_TAG: latest
permissions:
contents: read
packages: write # so that "secrets.GITHUB_TOKEN" can push packages
jobs:
# Binary build
build:
name: Build - ${{ matrix.job.name }}
runs-on: ubuntu-24.04
strategy:
fail-fast: false
matrix:
job:
- { name: "amd64", target: "x86_64-unknown-linux-musl" }
- { name: "arm64v8", target: "aarch64-unknown-linux-musl" }
- { name: "armv7", target: "armv7-unknown-linux-musleabihf" }
- { name: "i386", target: "i686-unknown-linux-musl" }
#- { name: "amd64fb", target: "x86_64-unknown-freebsd" }
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@v1
with:
toolchain: "1.70.0"
targets: ${{ matrix.job.target }}
components: "rustfmt"
- uses: Swatinem/rust-cache@v2
with:
prefix-key: ${{ matrix.job.os }}
- name: Build
uses: actions-rs/cargo@v1
with:
command: build
args: --release --all-features --target=${{ matrix.job.target }}
use-cross: true
- name: Exec chmod
run: chmod -v a+x target/${{ matrix.job.target }}/release/*
- name: Publish Artifacts
uses: actions/upload-artifact@v4
with:
name: binaries-linux-${{ matrix.job.name }}
path: |
target/${{ matrix.job.target }}/release/hbbr
target/${{ matrix.job.target }}/release/hbbs
target/${{ matrix.job.target }}/release/rustdesk-utils
if-no-files-found: error
# Build and push single-arch Docker images to ghcr.io
create-s6-overlay-images:
name: Docker push - ${{ matrix.job.name }}
needs: build
runs-on: ubuntu-24.04
strategy:
fail-fast: false
matrix:
job:
- { name: "amd64", docker_platform: "linux/amd64", s6_platform: "x86_64" }
- { name: "arm64v8", docker_platform: "linux/arm64", s6_platform: "aarch64" }
- { name: "armv7", docker_platform: "linux/arm/v7", s6_platform: "armhf" }
- { name: "i386", docker_platform: "linux/386", s6_platform: "i686" }
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
- name: Download binaries
uses: actions/download-artifact@v4
with:
pattern: binaries-linux-${{ matrix.job.name }}
path: docker/rootfs/usr/bin
merge-multiple: true
- name: Make binaries executable
run: chmod -v a+x docker/rootfs/usr/bin/*
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository }}-s6
- name: Get git tag
id: vars
run: |
T=${GITHUB_REF#refs/*/}
M=${T%%.*}
echo "GIT_TAG=$T" >> $GITHUB_ENV
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: "./docker"
platforms: ${{ matrix.job.docker_platform }}
push: true
provenance: false
build-args: |
S6_ARCH=${{ matrix.job.s6_platform }}
tags: |
ghcr.io/${{ github.repository }}-s6:${{ env.LATEST_TAG }}-${{ matrix.job.name }}
ghcr.io/${{ github.repository }}-s6:${{ env.GIT_TAG }}-${{ matrix.job.name }}
ghcr.io/${{ github.repository }}-s6:${{ env.MAJOR_TAG }}-${{ matrix.job.name }}
labels: ${{ steps.meta.outputs.labels }}
# Set up manifest and tag for pushed image
create-s6-overlay-images-manifest:
name: Manifest for s6-overlay images
needs: create-s6-overlay-images
runs-on: ubuntu-24.04
steps:
- name: Log in to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Get git tag
id: vars
run: |
T=${GITHUB_REF#refs/*/}
M=${T%%.*}
echo "GIT_TAG=$T" >> $GITHUB_ENV
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
# Create and push manifest for :ve.rs.ion tag
- name: Create and push manifest (:ve.rs.ion)
uses: Noelware/docker-manifest-action@master
if: github.event_name != 'workflow_dispatch'
with:
base-image: ghcr.io/${{ github.repository }}-s6:${{ env.GIT_TAG }}
extra-images: |
ghcr.io/${{ github.repository }}-s6:${{ env.GIT_TAG }}-amd64,
ghcr.io/${{ github.repository }}-s6:${{ env.GIT_TAG }}-arm64v8,
ghcr.io/${{ github.repository }}-s6:${{ env.GIT_TAG }}-armv7,
ghcr.io/${{ github.repository }}-s6:${{ env.GIT_TAG }}-i386
push: true
# Create and push manifest for :major tag
- name: Create and push manifest (:major)
uses: Noelware/docker-manifest-action@master
with:
base-image: ghcr.io/${{ github.repository }}-s6:${{ env.MAJOR_TAG }}
extra-images: |
ghcr.io/${{ github.repository }}-s6:${{ env.MAJOR_TAG }}-amd64,
ghcr.io/${{ github.repository }}-s6:${{ env.MAJOR_TAG }}-arm64v8,
ghcr.io/${{ github.repository }}-s6:${{ env.MAJOR_TAG }}-armv7,
ghcr.io/${{ github.repository }}-s6:${{ env.MAJOR_TAG }}-i386
push: true
# Create and push manifest for :latest tag
- name: Create and push manifest (:latest)
uses: Noelware/docker-manifest-action@master
with:
base-image: ghcr.io/${{ github.repository }}-s6:${{ env.LATEST_TAG }}
extra-images: |
ghcr.io/${{ github.repository }}-s6:${{ env.LATEST_TAG }}-amd64,
ghcr.io/${{ github.repository }}-s6:${{ env.LATEST_TAG }}-arm64v8,
ghcr.io/${{ github.repository }}-s6:${{ env.LATEST_TAG }}-armv7,
ghcr.io/${{ github.repository }}-s6:${{ env.LATEST_TAG }}-i386
push: true
# Build and push single-arch Docker images to ghcr.io
create-classic-images:
name: Docker push - ${{ matrix.job.name }}
needs: build
runs-on: ubuntu-24.04
strategy:
fail-fast: false
matrix:
job:
- { name: "amd64", docker_platform: "linux/amd64" }
- { name: "arm64v8", docker_platform: "linux/arm64" }
- { name: "armv7", docker_platform: "linux/arm/v7" }
- { name: "i386", docker_platform: "linux/386" }
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
- name: Download binaries
uses: actions/download-artifact@v4
with:
pattern: binaries-linux-${{ matrix.job.name }}
path: docker-classic
merge-multiple: true
- name: Make binaries executable
run: chmod -v a+x docker-classic/*
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository }}
- name: Get git tag
id: vars
run: |
T=${GITHUB_REF#refs/*/}
M=${T%%.*}
echo "GIT_TAG=$T" >> $GITHUB_ENV
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: "./docker-classic"
platforms: ${{ matrix.job.docker_platform }}
push: true
provenance: false
tags: |
ghcr.io/${{ github.repository }}:${{ env.LATEST_TAG }}-${{ matrix.job.name }}
ghcr.io/${{ github.repository }}:${{ env.GIT_TAG }}-${{ matrix.job.name }}
ghcr.io/${{ github.repository }}:${{ env.MAJOR_TAG }}-${{ matrix.job.name }}
labels: ${{ steps.meta.outputs.labels }}
# Set up manifest and tag for pushed image
create-classic-images-manifest:
name: Manifest for classic images
needs: create-classic-images
runs-on: ubuntu-24.04
steps:
- name: Log in to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Get git tag
id: vars
run: |
T=${GITHUB_REF#refs/*/}
M=${T%%.*}
echo "GIT_TAG=$T" >> $GITHUB_ENV
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
# Create and push manifest for :ve.rs.ion tag
- name: Create and push manifest (:ve.rs.ion)
uses: Noelware/docker-manifest-action@master
if: github.event_name != 'workflow_dispatch'
with:
base-image: ghcr.io/${{ github.repository }}:${{ env.GIT_TAG }}
extra-images: |
ghcr.io/${{ github.repository }}:${{ env.GIT_TAG }}-amd64,
ghcr.io/${{ github.repository }}:${{ env.GIT_TAG }}-arm64v8,
ghcr.io/${{ github.repository }}:${{ env.GIT_TAG }}-armv7,
ghcr.io/${{ github.repository }}:${{ env.GIT_TAG }}-i386
push: true
# Create and push manifest for :major tag
- name: Create and push manifest (:major)
uses: Noelware/docker-manifest-action@master
with:
base-image: ghcr.io/${{ github.repository }}:${{ env.MAJOR_TAG }}
extra-images: |
ghcr.io/${{ github.repository }}:${{ env.MAJOR_TAG }}-amd64,
ghcr.io/${{ github.repository }}:${{ env.MAJOR_TAG }}-arm64v8,
ghcr.io/${{ github.repository }}:${{ env.MAJOR_TAG }}-armv7,
ghcr.io/${{ github.repository }}:${{ env.MAJOR_TAG }}-i386
push: true
# Create and push manifest for :latest tag
- name: Create and push manifest (:latest)
uses: Noelware/docker-manifest-action@master
with:
base-image: ghcr.io/${{ github.repository }}:${{ env.LATEST_TAG }}
extra-images: |
ghcr.io/${{ github.repository }}:${{ env.LATEST_TAG }}-amd64,
ghcr.io/${{ github.repository }}:${{ env.LATEST_TAG }}-arm64v8,
ghcr.io/${{ github.repository }}:${{ env.LATEST_TAG }}-armv7,
ghcr.io/${{ github.repository }}:${{ env.LATEST_TAG }}-i386
push: true

80
.github/workflows/test.yml vendored Normal file

@@ -0,0 +1,80 @@
name: test
on:
push:
branches: [ "master" ]
paths-ignore:
- '**/README.md'
pull_request:
branches: [ "master" ]
paths-ignore:
- '**/README.md'
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
submodules: recursive
- uses: Swatinem/rust-cache@v2
- uses: actions-rs/cargo@v1
with:
command: check
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
submodules: recursive
- uses: Swatinem/rust-cache@v2
- uses: actions-rs/cargo@v1
with:
command: test
args: --all
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: rustfmt
submodules: recursive
- uses: Swatinem/rust-cache@v2
- uses: actions-rs/cargo@v1
with:
command: build
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --all -- --check
clippy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: clippy
submodules: recursive
- uses: Swatinem/rust-cache@v2
- uses: actions-rs/cargo@v1
with:
command: clippy
args: --all -- -D warnings

9
.gitignore vendored

@@ -1,3 +1,12 @@
target
id*
db*
debian-build
debian/.debhelper
debian/debhelper-build-stamp
.DS_Store
.vscode
src/version.rs
db_v2.sqlite3
test.*
.idea

3
.gitmodules vendored Normal file

@@ -0,0 +1,3 @@
[submodule "libs/hbb_common"]
path = libs/hbb_common
url = https://github.com/rustdesk/hbb_common

6
.vscode/settings.json vendored Normal file

@@ -0,0 +1,6 @@
{
"rust.checkWith": "clippy",
"rust.formatOnSave": true,
"rust.checkOnSave": true,
"rust.useNewErrorFormat": true
}

1991
Cargo.lock generated

File diff suppressed because it is too large

Cargo.toml

@@ -1,7 +1,7 @@
[package]
name = "hbbs"
version = "1.1.5"
authors = ["open-trade <info@rustdesk.com>"]
version = "1.1.13"
authors = ["rustdesk <info@rustdesk.com>"]
edition = "2021"
build = "build.rs"
default-run = "hbbs"
@@ -10,6 +10,10 @@ default-run = "hbbs"
name = "hbbr"
path = "src/hbbr.rs"
[[bin]]
name = "rustdesk-utils"
path = "src/utils.rs"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
@@ -22,16 +26,16 @@ clap = "2"
rust-ini = "0.18"
minreq = { version = "2.4", features = ["punycode"] }
machine-uid = "0.2"
mac_address = "1.1"
mac_address = "1.1.5"
whoami = "1.2"
base64 = "0.13"
axum = { version = "0.5", features = ["headers"] }
sqlx = { git = "https://github.com/open-trade/sqlx", features = [ "runtime-tokio-rustls", "sqlite", "macros", "chrono", "json" ] }
sqlx = { version = "0.6", features = [ "runtime-tokio-rustls", "sqlite", "macros", "chrono", "json" ] }
deadpool = "0.8"
async-trait = "0.1"
async-speed-limit = { git = "https://github.com/open-trade/async-speed-limit" }
uuid = { version = "0.8", features = ["v4"] }
bcrypt = "0.12"
uuid = { version = "1.0", features = ["v4"] }
bcrypt = "0.13"
chrono = "0.4"
jsonwebtoken = "8"
headers = "0.3"
@@ -40,12 +44,34 @@ sodiumoxide = "0.2"
tokio-tungstenite = "0.17"
tungstenite = "0.17"
regex = "1.4"
tower-http = { version = "0.2", features = ["fs", "trace", "cors"] }
tower-http = { version = "0.3", features = ["fs", "trace", "cors"] }
http = "0.2"
flexi_logger = { version = "0.22", features = ["async", "use_chrono_for_offset"] }
flexi_logger = { version = "0.22", features = ["async", "use_chrono_for_offset", "dont_minimize_extra_stacks"] }
ipnetwork = "0.20"
local-ip-address = "0.5.1"
dns-lookup = "1.0.8"
ping = "0.4.0"
[target.'cfg(any(target_os = "macos", target_os = "windows"))'.dependencies]
# https://github.com/rustdesk/rustdesk-server-pro/issues/189, using native-tls for better tls support
reqwest = { git = "https://github.com/rustdesk-org/reqwest", features = ["blocking", "socks", "json", "native-tls", "gzip"], default-features=false }
[target.'cfg(not(any(target_os = "macos", target_os = "windows")))'.dependencies]
reqwest = { git = "https://github.com/rustdesk-org/reqwest", features = ["blocking", "socks", "json", "rustls-tls", "rustls-tls-native-roots", "gzip"], default-features=false }
[build-dependencies]
hbb_common = { path = "libs/hbb_common" }
[workspace]
members = ["libs/hbb_common"]
exclude = ["ui"]
#https://github.com/johnthagen/min-sized-rust
#https://doc.rust-lang.org/cargo/reference/profiles.html#default-profiles
[profile.release]
lto = true
codegen-units = 1
panic = 'abort'
strip = true
#opt-level = 'z' # only have smaller size after strip # Default is 3, better performance
#rpath = true # Not needed

345
README-DE.md Normal file

@@ -0,0 +1,345 @@
<p align="center">
<a href="#manuelles-erstellen">Erstellen</a> •
<a href="#docker-image">Docker</a> •
<a href="#s6-overlay-basierte-images">S6-Overlay</a> •
<a href="#ein-schlüsselpaar-erstellen">Schlüsselpaar</a> •
<a href="#debian-pakete">Debian-Pakete</a> •
<a href="#umgebungsvariablen">Umgebungsvariablen</a><br>
[<a href="README.md">English</a>] | [<a href="README-NL.md">Nederlands</a>] | [<a href="README-TW.md">繁體中文</a>] | [<a href="README-ZH.md">简体中文</a>]<br>
</p>
# RustDesk Server-Programm
[![build](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml/badge.svg)](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml)
[**Herunterladen**](https://github.com/rustdesk/rustdesk-server/releases)
[**Handbuch**](https://rustdesk.com/docs/de/self-host/)
[**FAQ**](https://github.com/rustdesk/rustdesk/wiki/FAQ)
Hosten Sie Ihren eigenen RustDesk-Server selbst, er ist kostenlos und quelloffen.
## Manuelles Erstellen
```bash
cargo build --release
```
In target/release werden drei ausführbare Dateien erzeugt.
- hbbs - RustDesk ID/Rendezvous-Server
- hbbr - RustDesk Relay-Server
- rustdesk-utils - RustDesk CLI-Utilities
[Hier](https://github.com/rustdesk/rustdesk-server/releases) finden Sie aktualisierte Binärdateien.
Wenn Sie Ihren eigenen Server entwickeln wollen, könnte [rustdesk-server-demo](https://github.com/rustdesk/rustdesk-server-demo) ein besserer und einfacherer Start für Sie sein als dieses Repository.
## Docker-Image
Docker-Images werden automatisch generiert und bei jedem Github-Release veröffentlicht. Wir haben 2 Arten von Images.
### Klassisches Image
Diese Images sind mit `Ubuntu 20.04` gebaut, mit dem Zusatz der wichtigen Binärdateien (`hbbr` und `hbbs`). Sie sind auf [Docker hub](https://hub.docker.com/r/rustdesk/rustdesk-server/) mit diesen Tags verfügbar:
| Architektur | Image:Tag |
| --- | --- |
| amd64 | `rustdesk/rustdesk-server:latest` |
| arm64v8 | `rustdesk/rustdesk-server:latest-arm64v8` |
Sie können diese Images direkt mit `docker run` mit diesen Befehlen starten:
```bash
docker run --name hbbs --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
Oder ohne `--net=host`, aber die P2P-Direktverbindung kann dann nicht funktionieren.
Bei Systemen, die SELinux verwenden, muss `/root` durch `/root:z` ersetzt werden, damit die Container korrekt laufen. Alternativ kann die SELinux-Containertrennung durch Hinzufügen der Option `--security-opt label=disable` vollständig deaktiviert werden.
```bash
docker run --name hbbs -p 21115:21115 -p 21116:21116 -p 21116:21116/udp -p 21118:21118 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr -p 21117:21117 -p 21119:21119 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
Der Parameter `relay-server-ip` ist die IP-Adresse (oder der DNS-Name) des Servers, auf dem diese Container laufen. Der **optionale** Parameter `port` muss verwendet werden, wenn Sie einen anderen Port als **21117** für `hbbr` verwenden.
Sie können auch Docker Compose verwenden, wobei diese Konfiguration als Vorlage dient:
```yaml
version: '3'
networks:
rustdesk-net:
external: false
services:
hbbs:
container_name: hbbs
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21118:21118
image: rustdesk/rustdesk-server:latest
command: hbbs -r rustdesk.example.com:21117
volumes:
- ./data:/root
networks:
- rustdesk-net
depends_on:
- hbbr
restart: unless-stopped
hbbr:
container_name: hbbr
ports:
- 21117:21117
- 21119:21119
image: rustdesk/rustdesk-server:latest
command: hbbr
volumes:
- ./data:/root
networks:
- rustdesk-net
restart: unless-stopped
```
Bearbeiten Sie Zeile 16 so, dass sie auf Ihren Relay-Server verweist (den, der am Port 21117 lauscht). Sie können auch die Zeilen für die Volumes (Zeile 18 und 33) bearbeiten, wenn Sie dies wünschen.
(Die Anerkennung für Docker Compose geht an @lukebarone und @QuiGonLeong.)
## S6-Overlay-basierte Images
Diese Images sind mit `busybox:stable` gebaut, mit dem Zusatz Binärdateien (sowohl hbbr als auch hbbs) und [S6-overlay](https://github.com/just-containers/s6-overlay). Sie sind auf [Docker hub](https://hub.docker.com/r/rustdesk/rustdesk-server-s6/) mit diesen Tags verfügbar:
| Architektur | Version | Image:Tag |
| --- | --- | --- |
| multiarch | neueste | `rustdesk/rustdesk-server-s6:latest` |
| amd64 | neueste | `rustdesk/rustdesk-server-s6:latest-amd64` |
| i386 | neueste | `rustdesk/rustdesk-server-s6:latest-i386` |
| arm64v8 | neueste | `rustdesk/rustdesk-server-s6:latest-arm64v8` |
| armv7 | neueste | `rustdesk/rustdesk-server-s6:latest-armv7` |
| multiarch | 2 | `rustdesk/rustdesk-server-s6:2` |
| amd64 | 2 | `rustdesk/rustdesk-server-s6:2-amd64` |
| i386 | 2 | `rustdesk/rustdesk-server-s6:2-i386` |
| arm64v8 | 2 | `rustdesk/rustdesk-server-s6:2-arm64v8` |
| armv7 | 2 | `rustdesk/rustdesk-server-s6:2-armv7` |
| multiarch | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0` |
| amd64 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-amd64` |
| i386 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-i386` |
| arm64v8 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-arm64v8` |
| armv7 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-armv7` |
Es wird dringend empfohlen, das Image `multiarch` entweder mit dem Tag `major version` oder `latest` zu verwenden.
Das S6-Overlay fungiert als Supervisor und hält beide Prozesse am Laufen, sodass bei diesem Image keine zwei separaten Container benötigt werden.
Sie können diese Images direkt mit `docker run` mit diesem Befehl starten:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
oder ohne `--net=host`, aber die P2P-Direktverbindung kann dann nicht funktionieren.
```bash
docker run --name rustdesk-server \
-p 21115:21115 -p 21116:21116 -p 21116:21116/udp \
-p 21117:21117 -p 21118:21118 -p 21119:21119 \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
Oder Sie können eine Docker Compose-Datei verwenden:
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
volumes:
- ./data:/data
restart: unless-stopped
```
Für dieses Container-Image können Sie diese Umgebungsvariablen verwenden, **zusätzlich** zu den im Abschnitt **Umgebungsvariablen** angegebenen Variablen:
| Variable | optional | Beschreibung |
| --- | --- | --- |
| RELAY | nein | IP-Adresse/DNS-Name des Rechners, auf dem dieser Container läuft |
| ENCRYPTED_ONLY | ja | Wenn auf **1** gesetzt, wird eine unverschlüsselte Verbindung nicht akzeptiert |
| KEY_PUB | ja | Öffentlicher Teil des Schlüsselpaares |
| KEY_PRIV | ja | Privater Teil des Schlüsselpaares |
### Verwaltung von Geheimnissen in S6-Overlay-basierten Images
Sie können das Schlüsselpaar natürlich in einem Docker-Volume aufbewahren, aber es empfiehlt sich, die Schlüssel nicht ins Dateisystem zu schreiben; deshalb bieten wir einige Alternativen an.
Beim Start des Containers wird das Vorhandensein des Schlüsselpaares geprüft (`/data/id_ed25519.pub` und `/data/id_ed25519`). Wenn einer dieser Schlüssel nicht existiert, wird er aus den Umgebungsvariablen oder den Docker-Geheimnissen neu erstellt.
Dann wird die Gültigkeit des Schlüsselpaares überprüft: Wenn öffentlicher und privater Schlüssel nicht übereinstimmen, wird der Container angehalten.
Wenn Sie keine Schlüssel angeben, erzeugt `hbbs` einen für Sie und legt ihn am Standardspeicherort ab.
#### Umgebungsvariablen zum Speichern des Schlüsselpaars verwenden
Sie können Docker-Umgebungsvariablen verwenden, um die Schlüssel zu speichern. Folgen Sie einfach diesen Beispielen:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
-e "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ==" \
-e "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE=" \
-v "$PWD/db:/db" -d rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
- "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ=="
- "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE="
volumes:
- ./db:/db
restart: unless-stopped
```
#### Docker-Geheimnisse zum Speichern des Schlüsselpaars verwenden
Sie können alternativ auch Docker-Geheimnisse verwenden, um die Schlüssel zu speichern.
Dies ist nützlich, wenn Sie **Docker Compose** oder **Docker Swarm** verwenden.
Folgen Sie einfach diesem Beispiel:
```bash
cat secrets/id_ed25519.pub | docker secret create key_pub -
cat secrets/id_ed25519 | docker secret create key_priv -
docker service create --name rustdesk-server \
--secret key_priv --secret key_pub \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
--mount "type=bind,source=$PWD/db,destination=/db" \
rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
volumes:
- ./db:/db
restart: unless-stopped
secrets:
- key_pub
- key_priv
secrets:
key_pub:
file: secrets/id_ed25519.pub
key_priv:
file: secrets/id_ed25519
```
## Ein Schlüsselpaar erstellen
Für die Verschlüsselung wird ein Schlüsselpaar benötigt, das Sie bereitstellen können, aber Sie benötigen eine Möglichkeit, es zu erstellen.
Mit diesem Befehl können Sie ein Schlüsselpaar erzeugen:
```bash
/usr/bin/rustdesk-utils genkeypair
```
Wenn Sie das Paket `rustdesk-utils` nicht auf Ihrem System installiert haben (oder dies nicht wollen), können Sie den gleichen Befehl mit Docker aufrufen:
```bash
docker run --rm --entrypoint /usr/bin/rustdesk-utils rustdesk/rustdesk-server-s6:latest genkeypair
```
Die Ausgabe sieht dann etwa so aus:
```text
Public Key: 8BLLhtzUBU/XKAH4mep3p+IX4DSApe7qbAwNH9nv4yA=
Secret Key: egAVd44u33ZEUIDTtksGcHeVeAwywarEdHmf99KM5ajwEsuG3NQFT9coAfiZ6nen4hfgNICl7upsDA0f2e/jIA==
```
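Die beiden ausgegebenen Werte lassen sich direkt in die oben beschriebenen Variablen `KEY_PUB` und `KEY_PRIV` des S6-Images übernehmen. Eine mögliche Skizze (die `awk`-Aufrufe setzen genau das oben gezeigte Ausgabeformat voraus):
```bash
# Skizze: genkeypair-Ausgabe einlesen und als KEY_PUB/KEY_PRIV an den S6-Container übergeben
OUT=$(docker run --rm --entrypoint /usr/bin/rustdesk-utils rustdesk/rustdesk-server-s6:latest genkeypair)
KEY_PUB=$(printf '%s\n' "$OUT" | awk '/^Public Key:/ {print $3}')
KEY_PRIV=$(printf '%s\n' "$OUT" | awk '/^Secret Key:/ {print $3}')

docker run --name rustdesk-server \
  --net=host \
  -e "RELAY=rustdeskrelay.example.com" \
  -e "ENCRYPTED_ONLY=1" \
  -e "KEY_PUB=$KEY_PUB" \
  -e "KEY_PRIV=$KEY_PRIV" \
  -v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```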
## Debian-Pakete
Für jede Binärdatei stehen separate Debian-Pakete zur Verfügung, die Sie in [Releases](https://github.com/rustdesk/rustdesk-server/releases) finden können.
Diese Pakete sind für die folgenden Distributionen gedacht:
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Debian 11 Bullseye
- Debian 10 Buster
## Umgebungsvariablen
hbbs und hbbr können mit diesen Umgebungsvariablen konfiguriert werden.
Sie können die Variablen wie üblich angeben oder eine `.env`-Datei verwenden.
| Variable | Binärdatei | Beschreibung |
| --- | --- | --- |
| ALWAYS_USE_RELAY | hbbs | Wenn auf **Y** gesetzt, wird eine direkte Verbindung nicht zugelassen. |
| DB_URL | hbbs | Pfad für die Datenbankdatei |
| DOWNGRADE_START_CHECK | hbbr | Verzögerung (in Sekunden) vor der Downgrade-Prüfung |
| DOWNGRADE_THRESHOLD | hbbr | Schwellenwert der Downgrade-Prüfung (Bit/ms) |
| KEY | hbbs/hbbr | Wenn gesetzt, wird die Verwendung eines bestimmten Schlüssels erzwungen. Wenn auf **_** gesetzt, wird die Verwendung eines beliebigen Schlüssels erzwungen. |
| LIMIT_SPEED | hbbr | Höchstgeschwindigkeit (in Mb/s) |
| PORT | hbbs/hbbr | Lauschender Port (21116 für hbbs - 21117 für hbbr) |
| RELAY_SERVERS | hbbs | IP-Adresse/DNS-Name der Rechner, auf denen hbbr läuft (durch Komma getrennt) |
| RUST_LOG | all | Debug-Level einstellen (error\|warn\|info\|debug\|trace) |
| SINGLE_BANDWIDTH | hbbr | Maximale Bandbreite für eine einzelne Verbindung (in Mb/s) |
| TOTAL_BANDWIDTH | hbbr | Maximale Gesamtbandbreite (in Mb/s) |
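Zur Veranschaulichung eine mögliche `.env`-Datei mit einigen dieser Variablen; die Werte sind reine Beispielwerte und müssen an die eigene Umgebung angepasst werden:
```bash
# Beispielhafte .env-Datei für hbbs/hbbr (Werte nur zur Illustration)
RELAY_SERVERS=rustdesk.example.com
ALWAYS_USE_RELAY=N
DB_URL=./db_v2.sqlite3
LIMIT_SPEED=4
RUST_LOG=info
```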

345
README-NL.md Normal file

@@ -0,0 +1,345 @@
<p align="center">
<a href="#hoe-handmatig-opbouwen">Opbouwen</a> •
<a href="#docker-bestanden-images">Docker</a> •
<a href="#s6-overlay-gebaseerde-bestanden">S6-Overlay</a> •
<a href="#hoe-maak-je-een-key-paar">Key paar</a> •
<a href="#deb-pakketten">Debian pakketten</a> •
<a href="#env-variabelen">ENV variabelen</a><br>
[<a href="README.md">English</a>] | [<a href="README-DE.md">Deutsch</a>] | [<a href="README-TW.md">繁體中文</a>] | [<a href="README-ZH.md">简体中文</a>]<br>
</p>
# RustDesk Server Programma
[![build](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml/badge.svg)](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml)
[**Download**](https://github.com/rustdesk/rustdesk-server/releases)
[**Handleiding**](https://rustdesk.com/docs/nl/self-host/)
[**FAQ**](https://github.com/rustdesk/rustdesk/wiki/FAQ)
Zelf uw eigen RustDesk server hosten, het is gratis en open source.
## Hoe handmatig opbouwen
```bash
cargo build --release
```
In target/release worden drie uitvoerbare bestanden gegenereerd.
- hbbs - RustDesk ID/Rendezvous server
- hbbr - RustDesk relay server
- rustdesk-utils - RustDesk CLI hulpprogramma's
U kunt bijgewerkte binaries vinden op [releases](https://github.com/rustdesk/rustdesk-server/releases) pagina.
Als u uw eigen server wilt ontwikkelen, is [rustdesk-server-demo](https://github.com/rustdesk/rustdesk-server-demo) misschien een betere en eenvoudigere start voor u dan deze repo.
## Docker bestanden (images)
Docker bestanden (images) worden automatisch gegenereerd en gepubliceerd bij elke github release. We hebben 2 soorten bestanden (images).
### Klassiek bestand (image)
Deze bestanden (images) zijn gebouwd voor `ubuntu-20.04` met als enige toevoeging de belangrijkste binaries (`hbbr` en `hbbs`). Ze zijn beschikbaar op [Docker hub](https://hub.docker.com/r/rustdesk/rustdesk-server/) met deze tags:
| architectuur | image:tag |
| --- | --- |
| amd64 | `rustdesk/rustdesk-server:latest` |
| arm64v8 | `rustdesk/rustdesk-server:latest-arm64v8` |
U kunt deze bestanden (images) direct starten via `docker run` met deze commando's:
```bash
docker run --name hbbs --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
of zonder `--net=host`, maar een directe P2P verbinding zal niet werken.
Voor systemen die SELinux gebruiken is het vervangen van `/root` door `/root:z` nodig om de containers correct te laten draaien. Als alternatief kan SELinux containerscheiding volledig worden uitgeschakeld door de optie `--security-opt label=disable` toe te voegen.
```bash
docker run --name hbbs -p 21115:21115 -p 21116:21116 -p 21116:21116/udp -p 21118:21118 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr -p 21117:21117 -p 21119:21119 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
De `relay-server-ip` parameter is het IP adres (of dns naam) van de server waarop deze containers draaien. De **optionele** `port` parameter moet gebruikt worden als je een andere poort dan **21117** gebruikt voor `hbbr`.
U kunt ook docker-compose gebruiken, met deze configuratie als sjabloon:
```yaml
version: '3'
networks:
rustdesk-net:
external: false
services:
hbbs:
container_name: hbbs
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21118:21118
image: rustdesk/rustdesk-server:latest
command: hbbs -r rustdesk.example.com:21117
volumes:
- ./data:/root
networks:
- rustdesk-net
depends_on:
- hbbr
restart: unless-stopped
hbbr:
container_name: hbbr
ports:
- 21117:21117
- 21119:21119
image: rustdesk/rustdesk-server:latest
command: hbbr
volumes:
- ./data:/root
networks:
- rustdesk-net
restart: unless-stopped
```
Bewerk regel 16 om te verwijzen naar uw relais-server (degene die luistert op poort 21117). U kunt ook de volume-regels (regel 18 en 33) bewerken indien nodig.
(docker-compose erkenning gaat naar @lukebarone en @QuiGonLeong)
## S6-overlay gebaseerde bestanden
Deze bestanden (images) zijn gebouwd tegen `busybox:stable` met toevoeging van de binaries (zowel hbbr als hbbs) en [S6-overlay](https://github.com/just-containers/s6-overlay). Ze zijn beschikbaar op [Docker hub](https://hub.docker.com/r/rustdesk/rustdesk-server-s6/) met deze tags:
| architectuur | versie | image:tag |
| --- | --- | --- |
| multiarch | latest | `rustdesk/rustdesk-server-s6:latest` |
| amd64 | latest | `rustdesk/rustdesk-server-s6:latest-amd64` |
| i386 | latest | `rustdesk/rustdesk-server-s6:latest-i386` |
| arm64v8 | latest | `rustdesk/rustdesk-server-s6:latest-arm64v8` |
| armv7 | latest | `rustdesk/rustdesk-server-s6:latest-armv7` |
| multiarch | 2 | `rustdesk/rustdesk-server-s6:2` |
| amd64 | 2 | `rustdesk/rustdesk-server-s6:2-amd64` |
| i386 | 2 | `rustdesk/rustdesk-server-s6:2-i386` |
| arm64v8 | 2 | `rustdesk/rustdesk-server-s6:2-arm64v8` |
| armv7 | 2 | `rustdesk/rustdesk-server-s6:2-armv7` |
| multiarch | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0` |
| amd64 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-amd64` |
| i386 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-i386` |
| arm64v8 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-arm64v8` |
| armv7 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-armv7` |
Je wordt sterk aangeraden om het `multiarch` bestand (image) te gebruiken met de `major version` of `latest` tag.
De S6-overlay fungeert als supervisor en houdt beide processen draaiende, dus met dit bestand (image) is het niet nodig om twee aparte draaiende containers te hebben.
U kunt deze bestanden (images) direct starten via `docker run` met dit commando:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
of zonder `--net=host`, maar een directe P2P verbinding zal niet werken.
```bash
docker run --name rustdesk-server \
-p 21115:21115 -p 21116:21116 -p 21116:21116/udp \
-p 21117:21117 -p 21118:21118 -p 21119:21119 \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
Of u kunt een docker-compose bestand gebruiken:
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
volumes:
- ./data:/data
restart: unless-stopped
```
Voor dit container bestand (image) kunt u deze omgevingsvariabelen gebruiken, **naast** de variabelen in de volgende **ENV-variabelen** sectie:
| variabele | optioneel | beschrijving |
| --- | --- | --- |
| RELAY | no | het IP-adres/DNS-naam van de machine waarop deze container draait |
| ENCRYPTED_ONLY | yes | indien ingesteld op **"1"** wordt een niet-versleutelde verbinding niet geaccepteerd |
| KEY_PUB | yes | het openbare deel van het key paar |
| KEY_PRIV | yes | het private deel van het key paar |
### Geheim beheer in S6-overlay gebaseerde bestanden (images)
U kunt het key paar uiteraard in een docker volume bewaren, maar volgens de beste praktijken schrijft u de keys beter niet naar het bestandssysteem; daarom bieden we een paar opties.
Bij het opstarten van de container wordt de aanwezigheid van het key paar gecontroleerd (`/data/id_ed25519.pub` en `/data/id_ed25519`) en als een van deze keys niet bestaat, wordt deze opnieuw aangemaakt vanuit ENV variabelen of docker secrets.
Vervolgens wordt de geldigheid van het key paar gecontroleerd: indien publieke en private keys niet overeenkomen, stopt de container.
Als je geen keys opgeeft, zal `hbbs` er een voor je genereren en op de standaard locatie plaatsen.
#### Gebruik ENV om het key paar op te slaan
U kunt docker omgevingsvariabelen gebruiken om de keys op te slaan. Volg gewoon deze voorbeelden:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
-e "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ==" \
-e "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE=" \
-v "$PWD/db:/db" -d rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
- "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ=="
- "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE="
volumes:
- ./db:/db
restart: unless-stopped
```
#### Gebruik Docker secrets om het key paar op te slaan
U kunt ook docker secrets gebruiken om de keys op te slaan.
Dit is handig als je **docker-compose** of **docker swarm** gebruikt.
Volg deze voorbeelden:
```bash
cat secrets/id_ed25519.pub | docker secret create key_pub -
cat secrets/id_ed25519 | docker secret create key_priv -
docker service create --name rustdesk-server \
--secret key_priv --secret key_pub \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
--mount "type=bind,source=$PWD/db,destination=/db" \
rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
volumes:
- ./db:/db
restart: unless-stopped
secrets:
- key_pub
- key_priv
secrets:
key_pub:
file: secrets/id_ed25519.pub
key_priv:
file: secrets/id_ed25519
```
## Hoe maak je een key paar
Een key paar is nodig voor encryptie; u kunt het verstrekken, zoals eerder uitgelegd, maar u heeft een manier nodig om er een te maken.
U kunt dit commando gebruiken om een key paar te genereren:
```bash
/usr/bin/rustdesk-utils genkeypair
```
Als u het pakket `rustdesk-utils` niet op uw systeem hebt staan (of wilt), kunt u hetzelfde commando met docker uitvoeren:
```bash
docker run --rm --entrypoint /usr/bin/rustdesk-utils rustdesk/rustdesk-server-s6:latest genkeypair
```
De uitvoer ziet er ongeveer zo uit:
```text
Public Key: 8BLLhtzUBU/XKAH4mep3p+IX4DSApe7qbAwNH9nv4yA=
Secret Key: egAVd44u33ZEUIDTtksGcHeVeAwywarEdHmf99KM5ajwEsuG3NQFT9coAfiZ6nen4hfgNICl7upsDA0f2e/jIA==
```
## .deb pakketten
Voor elke binary zijn aparte .deb-pakketten beschikbaar, u kunt ze vinden in de [releases](https://github.com/rustdesk/rustdesk-server/releases).
Deze pakketten zijn bedoeld voor de volgende distributies:
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Debian 11 bullseye
- Debian 10 buster
## ENV variabelen
hbbs en hbbr kunnen worden geconfigureerd met deze ENV-variabelen.
U kunt de variabelen zoals gebruikelijk opgeven of een `.env` bestand gebruiken.
| variabele | binary | beschrijving |
| --- | --- | --- |
| ALWAYS_USE_RELAY | hbbs | indien ingesteld op **"Y"** wordt directe peer-verbinding niet toegestaan |
| DB_URL | hbbs | path voor database bestand |
| DOWNGRADE_START_CHECK | hbbr | vertraging (in seconden) voor downgrade-controle |
| DOWNGRADE_THRESHOLD | hbbr | drempel van downgrade controle (bit/ms) |
| KEY | hbbs/hbbr | indien ingesteld forceert dit het gebruik van een specifieke sleutel, indien ingesteld op **"_"** forceert dit het gebruik van een willekeurige sleutel |
| LIMIT_SPEED | hbbr | snelheidslimiet (in Mb/s) |
| PORT | hbbs/hbbr | luister-poort (21116 voor hbbs - 21117 voor hbbr) |
| RELAY_SERVERS | hbbs | IP-adres/DNS-naam van de machines waarop hbbr draait (gescheiden door komma) |
| RUST_LOG | all | debug-niveau instellen (error\|warn\|info\|debug\|trace) |
| SINGLE_BANDWIDTH | hbbr | maximale bandbreedte voor een enkele verbinding (in Mb/s) |
| TOTAL_BANDWIDTH | hbbr | maximale totale bandbreedte (in Mb/s) |

347
README-TW.md Normal file

@@ -0,0 +1,347 @@
<p align="center">
<a href="#如何自行建置">自行建置</a> •
<a href="#Docker-映像檔">Docker</a> •
<a href="#基於-S6-overlay-的映象檔">S6-overlay</a> •
<a href="#如何建立金鑰對">金鑰對</a> •
<a href="#deb-套件">Debian</a> •
<a href="#ENV-環境參數">環境參數</a><br>
[<a href="README.md">English</a>] | [<a href="README-DE.md">Deutsch</a>] | [<a href="README-NL.md">Nederlands</a>] | [<a href="README-ZH.md">简体中文</a>]<br>
</p>
# RustDesk Server Program
[![build](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml/badge.svg)](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml)
[**下載**](https://github.com/rustdesk/rustdesk-server/releases)
[**說明文件**](https://rustdesk.com/docs/zh-tw/self-host/)
[**FAQ**](https://github.com/rustdesk/rustdesk/wiki/FAQ)
自行建置屬於您自己的 RustDesk 伺服器,它是免費的且開源。
## 如何自行建置
```bash
cargo build --release
```
在 target/release 中會產生三個可執行檔。
- hbbs - RustDesk ID/會合伺服器
- hbbr - RustDesk 中繼伺服器
- rustdesk-utils - RustDesk 命令行工具
您可以在 [releases](https://github.com/rustdesk/rustdesk-server/releases) 頁面上找到更新的執行檔。
如果您需要額外功能,[RustDesk 專業版伺服器](https://rustdesk.com/pricing.html) 或許更適合您。
如果您想開發自己的伺服器,[rustdesk-server-demo](https://github.com/rustdesk/rustdesk-server-demo) 可能是一個比這個倉庫更好、更簡單的開始。
## Docker 映像檔
Docker 映像檔會在每次 GitHub 發布時自動生成並發布。我們有兩種映像檔。
### Classic 映像檔
這些映像檔是基於 `ubuntu-20.04` 建置的,僅添加了兩個主要的執行檔(`hbbr` 和 `hbbs`)。它們可以透過以下 tags 在 [Docker Hub](https://hub.docker.com/r/rustdesk/rustdesk-server/) 上取得:
| 架構 | image:tag |
| ------- | ----------------------------------------- |
| amd64 | `rustdesk/rustdesk-server:latest` |
| arm64v8 | `rustdesk/rustdesk-server:latest-arm64v8` |
您可以使用以下指令,直接透過 ``docker run`` 來啟動這些映像檔:
```bash
docker run --name hbbs --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
或刪去 `--net=host` 但 P2P 直接連線會無法運作。
對於使用 SELinux 的系統,需要將 ``/root`` 替換為 ``/root:z``,以便容器正確運行。或者,也可以通過添加選項 ``--security-opt label=disable`` 完全禁用 SELinux 容器隔離。
```bash
docker run --name hbbs -p 21115:21115 -p 21116:21116 -p 21116:21116/udp -p 21118:21118 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr -p 21117:21117 -p 21119:21119 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
`relay-server-ip` 參數是執行這些容器的伺服器的 IP 地址(或 DNS 名稱)。如果您為 `hbbr` 使用的端口不是 **21117**,則必須使用 **可選** 的 `port` 參數。
您也可以使用 docker-compose 使用這個設定做為範例:
```yaml
version: '3'
networks:
rustdesk-net:
external: false
services:
hbbs:
container_name: hbbs
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21118:21118
image: rustdesk/rustdesk-server:latest
command: hbbs -r rustdesk.example.com:21117
volumes:
- ./data:/root
networks:
- rustdesk-net
depends_on:
- hbbr
restart: unless-stopped
hbbr:
container_name: hbbr
ports:
- 21117:21117
- 21119:21119
image: rustdesk/rustdesk-server:latest
command: hbbr
volumes:
- ./data:/root
networks:
- rustdesk-net
restart: unless-stopped
```
請編輯第 16 行,將其指向您的中繼伺服器 (監聽端口 21117 那一個)。 如果需要的話,您也可以編輯 volume (第 18 和 33 行)。
(感謝 @lukebarone 和 @QuiGonLeong 協助提供 docker-compose 的設定範例)
## 基於 S6-overlay 的映象檔
這些映象檔是針對 `busybox:stable` 建置的,並添加了執行檔(hbbr 和 hbbs)以及 [S6-overlay](https://github.com/just-containers/s6-overlay)。它們可以透過以下 tags 在 [Docker hub](https://hub.docker.com/r/rustdesk/rustdesk-server-s6/) 上取得:
| 架構 | version | image:tag |
| --------- | ------- | -------------------------------------------- |
| multiarch | latest | `rustdesk/rustdesk-server-s6:latest` |
| amd64 | latest | `rustdesk/rustdesk-server-s6:latest-amd64` |
| i386 | latest | `rustdesk/rustdesk-server-s6:latest-i386` |
| arm64v8 | latest | `rustdesk/rustdesk-server-s6:latest-arm64v8` |
| armv7 | latest | `rustdesk/rustdesk-server-s6:latest-armv7` |
| multiarch | 2 | `rustdesk/rustdesk-server-s6:2` |
| amd64 | 2 | `rustdesk/rustdesk-server-s6:2-amd64` |
| i386 | 2 | `rustdesk/rustdesk-server-s6:2-i386` |
| arm64v8 | 2 | `rustdesk/rustdesk-server-s6:2-arm64v8` |
| armv7 | 2 | `rustdesk/rustdesk-server-s6:2-armv7` |
| multiarch | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0` |
| amd64 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-amd64` |
| i386 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-i386` |
| arm64v8 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-arm64v8` |
| armv7 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-armv7` |
強烈建議您使用 `multiarch` 映象檔,並搭配 `major version` 或 `latest` tag。
S6-overlay 在此充當監督程序,保持兩個進程運行,因此使用此映象檔,您無需運行兩個獨立的容器。
您可以直接使用以下命令使用 `docker run` 來啟動這個映象檔:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
或刪去 `--net=host` 但 P2P 直接連線會無法運作。
```bash
docker run --name rustdesk-server \
-p 21115:21115 -p 21116:21116 -p 21116:21116/udp \
-p 21117:21117 -p 21118:21118 -p 21119:21119 \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
或是您可以使用 docker-compose 文件:
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
volumes:
- ./data:/data
restart: unless-stopped
```
對於此容器映象檔,**除了**以下**環境變數**部分指定的變數之外,您還可以使用這些環境變數:
| 環境變數 | 是否可選 | 敘述 |
| -------------- | -------- | ------------------------------------------ |
| RELAY | 否 | 運行此容器的機器的 IP 地址/ DNS 名稱 |
| ENCRYPTED_ONLY | 是 | 如果設置為 **"1"**,將不接受未加密的連接。 |
| KEY_PUB | 是 | 金鑰對中的公鑰Public Key |
| KEY_PRIV | 是 | 金鑰對中的私鑰Private Key |
### 基於 S6-overlay 映象檔的 Secret 管理
您可以將金鑰對保存在 Docker volume 中,但最佳實踐建議不要將金鑰寫入文件系統;因此,我們提供了一些選項。
在容器啟動時,會檢查金鑰對是否存在(`/data/id_ed25519.pub` 和 `/data/id_ed25519`),如果其中一個金鑰不存在,則會從環境變數或 Docker Secret 重新生成它。
然後檢查金鑰對的有效性:如果公鑰和私鑰不匹配,容器將停止運行。
如果您未提供金鑰,`hbbs` 將為您產生一個,並將其放置在默認位置。
#### 使用 ENV 存儲金鑰對
您可以使用 Docker 環境變數來儲存金鑰。只需按照以下範例操作:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
-e "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ==" \
-e "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE=" \
-v "$PWD/db:/db" -d rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
- "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ=="
- "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE="
volumes:
- ./db:/db
restart: unless-stopped
```
#### 使用 Docker Secret 來儲存金鑰對
您還可以使用 Docker Secret 來儲存金鑰。
如果您使用 **docker-compose** 或 **docker swarm**,這很有用。
只需按照以下示例操作:
```bash
cat secrets/id_ed25519.pub | docker secret create key_pub -
cat secrets/id_ed25519 | docker secret create key_priv -
docker service create --name rustdesk-server \
--secret key_priv --secret key_pub \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
--mount "type=bind,source=$PWD/db,destination=/db" \
rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
volumes:
- ./db:/db
restart: unless-stopped
secrets:
- key_pub
- key_priv
secrets:
key_pub:
file: secrets/id_ed25519.pub
key_priv:
file: secrets/id_ed25519
```
## 如何建立金鑰對
加密需要一對金鑰;您可以按照前面所述提供它,但需要一種生成金鑰對的方法。
您可以使用以下命令生成一對金鑰:
```bash
/usr/bin/rustdesk-utils genkeypair
```
如果您沒有(或不想)在系統上安裝 `rustdesk-utils` 套件,您可以使用 Docker 執行相同的命令:
```bash
docker run --rm --entrypoint /usr/bin/rustdesk-utils rustdesk/rustdesk-server-s6:latest genkeypair
```
輸出將類似於以下內容:
```text
Public Key: 8BLLhtzUBU/XKAH4mep3p+IX4DSApe7qbAwNH9nv4yA=
Secret Key: egAVd44u33ZEUIDTtksGcHeVeAwywarEdHmf99KM5ajwEsuG3NQFT9coAfiZ6nen4hfgNICl7upsDA0f2e/jIA==
```
## .deb 套件
每個執行檔都有單獨的 .deb 套件可供使用,您可以在 [releases](https://github.com/rustdesk/rustdesk-server/releases) 中找到它們。
這些套件適用於以下發行版:
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Debian 11 bullseye
- Debian 10 buster
## ENV 環境參數
可以使用這些 ENV 參數來配置 hbbs 和 hbbr。
您可以像往常一樣指定參數,或者使用 .env 文件。
| 參數 | 執行檔 | 敘述 |
| --------------------- | --------- | -------------------------------------------------------------------- |
| ALWAYS_USE_RELAY | hbbs | 如果設為 **"Y"**,禁止直接點對點連接 |
| DB_URL | hbbs | 資料庫的路徑 |
| DOWNGRADE_START_CHECK | hbbr | 降級檢查之前的延遲時間(以秒為單位) |
| DOWNGRADE_THRESHOLD | hbbr | 降級檢查的閾值(bit/ms) |
| KEY | hbbs/hbbr | 如果設置了,將強制使用特定金鑰,如果設為 **"_"**,則強制使用任何金鑰 |
| LIMIT_SPEED | hbbr | 速度限制(以 Mb/s 為單位) |
| PORT | hbbs/hbbr | 監聽端口(hbbs 為 21116,hbbr 為 21117) |
| RELAY_SERVERS | hbbs | 運行 hbbr 的機器的 IP 地址/DNS 名稱(用逗號分隔) |
| RUST_LOG | all | 設定 debug level (error\|warn\|info\|debug\|trace) |
| SINGLE_BANDWIDTH | hbbr | 單個連接的最大頻寬(以 Mb/s 為單位) |
| TOTAL_BANDWIDTH | hbbr | 最大總頻寬(以 Mb/s 為單位) |

348
README-ZH.md Normal file

@@ -0,0 +1,348 @@
<p align="center">
<a href="#如何自行构建">自行构建</a> •
<a href="#Docker-镜像">Docker</a> •
<a href="#基于-S6-overlay-的镜像">S6-overlay</a> •
<a href="#如何创建密钥">密钥</a> •
<a href="#deb-套件">Debian</a> •
<a href="#ENV-环境参数">环境参数</a><br>
[<a href="README.md">English</a>] | [<a href="README-DE.md">Deutsch</a>] | [<a href="README-NL.md">Nederlands</a>] | [<a href="README-TW.md">繁体中文</a>]<br>
</p>
# RustDesk Server Program
[![build](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml/badge.svg)](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml)
[**下载**](https://github.com/rustdesk/rustdesk-server/releases)
[**说明文件**](https://rustdesk.com/docs/zh-cn/self-host/)
[**FAQ**](https://github.com/rustdesk/rustdesk/wiki/FAQ)
自行搭建属于你的RustDesk服务器,所有的一切都是免费且开源的
## 如何自行构建
```bash
cargo build --release
```
执行后会在target/release目录下生成三个对应平台的可执行程序
- hbbs - RustDesk ID/会合服务器
- hbbr - RustDesk 中继服务器
- rustdesk-utils - RustDesk 命令行工具
您可以在 [releases](https://github.com/rustdesk/rustdesk-server/releases) 页面中找到最新的服务端软件。
如果您需要额外的功能支持,[RustDesk 专业版服务器](https://rustdesk.com/pricing.html) 可能更适合您。
如果您想开发自己的服务器,[rustdesk-server-demo](https://github.com/rustdesk/rustdesk-server-demo) 应该会比直接使用这个仓库更简单快捷。
## Docker 镜像
Docker镜像会在每次 GitHub 发布新的release版本时自动构建。我们提供两种类型的镜像。
### Classic 传统镜像
这个类型的镜像是基于 `ubuntu-20.04` 进行构建,镜像仅包含两个主要的可执行程序(`hbbr``hbbs`。它们可以通过以下tag在 [Docker Hub](https://hub.docker.com/r/rustdesk/rustdesk-server/) 上获得:
| 架构 | image:tag |
|---------| ----------------------------------------- |
| amd64 | `rustdesk/rustdesk-server:latest` |
| arm64v8 | `rustdesk/rustdesk-server:latest-arm64v8` |
您可以使用以下命令,直接通过 ``docker run`` 来启动这些镜像:
```bash
docker run --name hbbs --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
或不使用 `--net=host` 参数启动, 但这样 P2P 直连功能将无法工作。
对于使用了 SELinux 的系统,您需要将 ``/root`` 替换为 ``/root:z``,以保证容器的正常运行。或者,也可以通过添加参数 ``--security-opt label=disable`` 来完全禁用 SELinux 容器隔离。
```bash
docker run --name hbbs -p 21115:21115 -p 21116:21116 -p 21116:21116/udp -p 21118:21118 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr -p 21117:21117 -p 21119:21119 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
`relay-server-ip` 参数是运行这些容器的服务器的 IP 地址(或 DNS 名称)。如果你不想使用 **21117** 作为 `hbbr` 的服务端口,可使用可选参数 `port` 进行指定。
您也可以使用 docker-compose 进行构建,以下为配置示例:
```yaml
version: '3'
networks:
rustdesk-net:
external: false
services:
hbbs:
container_name: hbbs
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21118:21118
image: rustdesk/rustdesk-server:latest
command: hbbs -r rustdesk.example.com:21117
volumes:
- ./data:/root
networks:
- rustdesk-net
depends_on:
- hbbr
restart: unless-stopped
hbbr:
container_name: hbbr
ports:
- 21117:21117
- 21119:21119
image: rustdesk/rustdesk-server:latest
command: hbbr
volumes:
- ./data:/root
networks:
- rustdesk-net
restart: unless-stopped
```
编辑第16行来指定你的中继服务器 (默认端口监听在 21117 的那一个)。 如果需要的话,您也可以编辑 volume 信息 (第 18 和 33 行)。
(感谢 @lukebarone 和 @QuiGonLeong 协助提供的 docker-compose 配置示例)
## 基于 S6-overlay 的镜像
这些镜像是针对 `busybox:stable` 构建的,并添加了可执行程序(hbbr 和 hbbs)以及 [S6-overlay](https://github.com/just-containers/s6-overlay)。它们可以使用以下 tag 在 [Docker hub](https://hub.docker.com/r/rustdesk/rustdesk-server-s6/) 上获取:
| 架构 | version | image:tag |
| --------- | ------- | -------------------------------------------- |
| multiarch | latest | `rustdesk/rustdesk-server-s6:latest` |
| amd64 | latest | `rustdesk/rustdesk-server-s6:latest-amd64` |
| i386 | latest | `rustdesk/rustdesk-server-s6:latest-i386` |
| arm64v8 | latest | `rustdesk/rustdesk-server-s6:latest-arm64v8` |
| armv7 | latest | `rustdesk/rustdesk-server-s6:latest-armv7` |
| multiarch | 2 | `rustdesk/rustdesk-server-s6:2` |
| amd64 | 2 | `rustdesk/rustdesk-server-s6:2-amd64` |
| i386 | 2 | `rustdesk/rustdesk-server-s6:2-i386` |
| arm64v8 | 2 | `rustdesk/rustdesk-server-s6:2-arm64v8` |
| armv7 | 2 | `rustdesk/rustdesk-server-s6:2-armv7` |
| multiarch | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0` |
| amd64 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-amd64` |
| i386 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-i386` |
| arm64v8 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-arm64v8` |
| armv7 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-armv7` |
强烈建议您使用`major version` 或 `latest` tag 的 `multiarch` 架构的镜像。
S6-overlay 在此处作为监控程序,用以保证两个进程的运行,因此使用此镜像,您无需运行两个容器。
您可以使用 `docker run` 命令直接启动镜像,如下:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
或删去 `--net=host` 参数,但 P2P 直连功能将无法工作。
```bash
docker run --name rustdesk-server \
-p 21115:21115 -p 21116:21116 -p 21116:21116/udp \
-p 21117:21117 -p 21118:21118 -p 21119:21119 \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
或着您也可以使用 docker-compose 文件:
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
volumes:
- ./data:/data
restart: unless-stopped
```
对于此容器镜像,除了在下面的环境变量部分指定的变量之外,您还可以使用以下`环境变量`:
| 环境变量 | 是否可选 | 描述 |
|----------------|------|--------------------------|
| RELAY | 否 | 运行此容器的宿主机的 IP 地址/ DNS 名称 |
| ENCRYPTED_ONLY | 是 | 如果设置为 **"1"**,将不接受未加密的连接。 |
| KEY_PUB | 是 | 密钥对中的公钥Public Key |
| KEY_PRIV | 是 | 密钥对中的私钥Private Key |
### 基于 S6-overlay 镜像的密钥管理
您可以将密钥对保存在 Docker volume 中,但我们建议不要将密钥写入文件系统中;因此,我们提供了一些方案。
在容器启动时,会检查密钥对是否存在(`/data/id_ed25519.pub` 和 `/data/id_ed25519`),如果其中一个密钥不存在,则会从环境变量或 Docker Secret 中重新生成它。
然后检查密钥对的可用性:如果公钥和私钥不匹配,容器将停止运行。
如果您未提供密钥,`hbbs` 将会在默认位置生成一个。
#### 使用 ENV 存储密钥对
您可以使用 Docker 环境变量来存储密钥,示例如下:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
-e "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ==" \
-e "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE=" \
-v "$PWD/db:/db" -d rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
- "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ=="
- "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE="
volumes:
- ./db:/db
restart: unless-stopped
```
#### 使用 Docker Secret 来保存密钥对
您还可以使用 Docker Secret 来保存密钥。
如果您使用 **docker-compose** 或 **docker swarm**,推荐使用这种方式。
只需按照以下示例操作:
```bash
cat secrets/id_ed25519.pub | docker secret create key_pub -
cat secrets/id_ed25519 | docker secret create key_priv -
docker service create --name rustdesk-server \
--secret key_priv --secret key_pub \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
--mount "type=bind,source=$PWD/db,destination=/db" \
rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
volumes:
- ./db:/db
restart: unless-stopped
secrets:
- key_pub
- key_priv
secrets:
key_pub:
file: secrets/id_ed25519.pub
key_priv:
file: secrets/id_ed25519
```
## 如何生成密钥对
加密需要一对密钥;您可以按照前面所述提供它,但需要一个工具去生成密钥对。
您可以使用以下命令生成一对密钥:
```bash
/usr/bin/rustdesk-utils genkeypair
```
如果您没有(或不想)在系统上安装 `rustdesk-utils` 套件,您可以使用 Docker 执行相同的命令:
```bash
docker run --rm --entrypoint /usr/bin/rustdesk-utils rustdesk/rustdesk-server-s6:latest genkeypair
```
运行后的输出内容如下:
```text
Public Key: 8BLLhtzUBU/XKAH4mep3p+IX4DSApe7qbAwNH9nv4yA=
Secret Key: egAVd44u33ZEUIDTtksGcHeVeAwywarEdHmf99KM5ajwEsuG3NQFT9coAfiZ6nen4hfgNICl7upsDA0f2e/jIA==
```
## .deb 套件
每个可执行文件都有单独的 .deb 套件可供使用,您可以在 [releases](https://github.com/rustdesk/rustdesk-server/releases) 页面中找到它们。
这些套件适用于以下发行版:
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Debian 11 bullseye
- Debian 10 buster
## ENV 环境变量
可以使用这些`环境变量`参数来配置 hbbs 和 hbbr。
您可以像往常一样指定参数,或者使用 .env 文件。
| 参数 | 可执行文件 | 描述 |
|-----------------------|---------------|--------------------------------------------------|
| ALWAYS_USE_RELAY | hbbs | 如果设定为 **"Y"**,将关闭直接点对点连接功能 |
| DB_URL | hbbs | 数据库文件路径 |
| DOWNGRADE_START_CHECK | hbbr | 降级检查之前的延迟时间(以秒为单位) |
| DOWNGRADE_THRESHOLD | hbbr | 降级检查的阈值(bit/ms) |
| KEY | hbbs/hbbr | 如果设置了此参数,将强制使用指定密钥对,如果设为 **"_"**,则强制使用任意密钥 |
| LIMIT_SPEED | hbbr | 速度限制(以 Mb/s 为单位) |
| PORT | hbbs/hbbr | 监听端口(hbbs 为 21116,hbbr 为 21117) |
| RELAY_SERVERS | hbbs | 运行 hbbr 的机器的 IP 地址/DNS 名称(用逗号分隔) |
| RUST_LOG | all | 设置 debug level (error\|warn\|info\|debug\|trace) |
| SINGLE_BANDWIDTH | hbbr | 单个连接的最大带宽(以 Mb/s 为单位) |
| TOTAL_BANDWIDTH | hbbr | 最大总带宽(以 Mb/s 为单位) |

263
README.md

@@ -1,3 +1,13 @@
<p align="center">
<a href="#how-to-build-manually">Manually</a> •
<a href="#docker-images">Docker</a> •
<a href="#s6-overlay-based-images">S6-overlay</a> •
<a href="#how-to-create-a-keypair">Keypair</a> •
<a href="#deb-packages">Debian</a> •
<a href="#env-variables">Variables</a><br>
[<a href="README-DE.md">Deutsch</a>] | [<a href="README-NL.md">Nederlands</a>] | [<a href="README-TW.md">繁體中文</a>] | [<a href="README-ZH.md">简体中文</a>]<br>
</p>
# RustDesk Server Program
[![build](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml/badge.svg)](https://github.com/rustdesk/rustdesk-server/actions/workflows/build.yaml)
@@ -16,33 +26,52 @@ Self-host your own RustDesk server, it is free and open source.
cargo build --release
```
Two executables will be generated in target/release.
Three executables will be generated in target/release.
- hbbs - RustDesk ID/Rendezvous server
- hbbr - RustDesk relay server
- rustdesk-utils - RustDesk CLI utilities
You can find updated binaries on the [releases](https://github.com/rustdesk/rustdesk-server/releases) page.
You can find updated binaries on the [Releases](https://github.com/rustdesk/rustdesk-server/releases) page.
If you wanna develop your own server, [rustdesk-server-demo](https://github.com/rustdesk/rustdesk-server-demo) might be a better and simpler start for you than this repo.
If you want extra features, [RustDesk Server Pro](https://rustdesk.com/pricing.html) might suit you better.
If you want to develop your own server, [rustdesk-server-demo](https://github.com/rustdesk/rustdesk-server-demo) might be a better and simpler start for you than this repo.
## Docker images
Docker images are automatically generated and published on every github release. We have 2 kind of images.
Docker images are automatically generated and published to [Docker Hub](https://hub.docker.com/r/rustdesk) and [GitHub Container Registry](https://github.com/rustdesk?tab=packages&repo_name=rustdesk-server) on every GitHub release. We have 2 kinds of images.
### Classic image
These images are build against `ubuntu-20.04` with the only addition of the binaries (both hbbr and hbbs). They're available on [Docker hub](https://hub.docker.com/r/rustdesk/rustdesk-server/) with these tags:
These images are built from scratch with two main binaries (`hbbs` and `hbbr`). They're available on [Docker Hub](https://hub.docker.com/r/rustdesk/rustdesk-server/) and [GitHub Container Registry](https://github.com/rustdesk/rustdesk-server/pkgs/container/rustdesk-server) with these architectures:
* amd64
* arm64v8
* armv7
You could use `latest` tag or major version tag `1` with supported architectures:
| Version | image:tag |
| ------------- | --------------------------------- |
| latest | `rustdesk/rustdesk-server:latest` |
| Major version | `rustdesk/rustdesk-server:1` |
| architecture | image:tag |
| --- | --- |
| amd64 | `rustdesk/rustdesk-server:latest` |
| arm64v8 | `rustdesk/rustdesk-server:latest-arm64v8` |
You can start these images directly with `docker run` with these commands:
```bash
docker run --name hbbs -p 21115:21115 -p 21116:21116 -p 21116:21116/udp -p 21118:21118 -v "$PWD:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr -p 21117:21117 -p 21119:21119 -v "$PWD:/root" -d rustdesk/rustdesk-server:latest hbbr
docker run --name hbbs --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
or without `--net=host`, but P2P direct connection can not work.
For systems using SELinux, replacing `/root` by `/root:z` is required for the containers to run correctly. Alternatively, SELinux container separation can be disabled completely by adding the option `--security-opt label=disable`.
```bash
docker run --name hbbs -p 21115:21115 -p 21116:21116 -p 21116:21116/udp -p 21118:21118 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr -p 21117:21117 -p 21119:21119 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
The `relay-server-ip` parameter is the IP address (or dns name) of the server running these containers. The **optional** `port` parameter has to be used if you use a port different than **21117** for `hbbr`.
@@ -67,7 +96,7 @@ services:
image: rustdesk/rustdesk-server:latest
command: hbbs -r rustdesk.example.com:21117
volumes:
- ./hbbs:/root
- ./data:/root
networks:
- rustdesk-net
depends_on:
@@ -82,44 +111,52 @@ services:
image: rustdesk/rustdesk-server:latest
command: hbbr
volumes:
- ./hbbr:/root
- ./data:/root
networks:
- rustdesk-net
restart: unless-stopped
```
Edit line 16 to point to your relay server (the one listening on port 21117). You can also edit the volume lines (L18 and L33) if you need.
Edit line 16 to point to your relay server (the one listening on port 21117). You can also edit the volume lines (line 18 and line 33) if you need.
(docker-compose credit goes to @lukebarone and @QuiGonLeong)
> [!NOTE]
> The rustdesk/rustdesk-server:latest in China may be replaced with the latest version number on Docker Hub, such as `rustdesk-server:1.1.10-3`. Otherwise, the old version may be pulled due to image acceleration.
> [!NOTE]
> If you are experiencing issues pulling from Docker Hub, try pulling from the [GitHub Container Registry](https://github.com/rustdesk/rustdesk-server/pkgs/container/rustdesk-server) instead.
## S6-overlay based images
These images are build against `busybox:stable` with the addition of the binaries (both hbbr and hbbs) and [S6-overlay](https://github.com/just-containers/s6-overlay). They're available on [Docker hub](https://hub.docker.com/r/rustdesk/rustdesk-server-s6/) with these tags:
These images are built against `busybox:stable` with the addition of the binaries (both `hbbs` and `hbbr`) and [S6-overlay](https://github.com/just-containers/s6-overlay). They're available on [Docker hub](https://hub.docker.com/r/rustdesk/rustdesk-server-s6/) and [GitHub Container Registry](https://github.com/rustdesk/rustdesk-server/pkgs/container/rustdesk-server) with these architectures:
| architecture | version | image:tag |
| --- | --- | --- |
| multiarch | latest | `rustdesk/rustdesk-server-s6:latest` |
| amd64 | latest | `rustdesk/rustdesk-server-s6:latest-amd64` |
| i386 | latest | `rustdesk/rustdesk-server-s6:latest-i386` |
| arm64v8 | latest | `rustdesk/rustdesk-server-s6:latest-arm64v8` |
| armv7 | latest | `rustdesk/rustdesk-server-s6:latest-armv7` |
| multiarch | 2 | `rustdesk/rustdesk-server-s6:2` |
| amd64 | 2 | `rustdesk/rustdesk-server-s6:2-amd64` |
| i386 | 2 | `rustdesk/rustdesk-server-s6:2-i386` |
| arm64v8 | 2 | `rustdesk/rustdesk-server-s6:2-arm64v8` |
| armv7 | 2 | `rustdesk/rustdesk-server-s6:2-armv7` |
| multiarch | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0` |
| amd64 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-amd64` |
| i386 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-i386` |
| arm64v8 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-arm64v8` |
| armv7 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-armv7` |
* amd64
* i386
* arm64v8
* armv7
You're strongly encouraged to use the `multiarch` image either with the `major version` or `latest` tag.
You could use `latest` tag or major version tag `1` with supported architectures:
The S6-overlay acts as a supervisor and keeps both process running, so with this image there's no need to have two separate running containers.
| Version | image:tag |
| ------------- | ------------------------------------ |
| latest | `rustdesk/rustdesk-server-s6:latest` |
| Major version | `rustdesk/rustdesk-server-s6:1` |
The S6-overlay acts as a supervisor and keeps both processes running, so with this image, there's no need to have two separate running containers.
You can start these images directly with `docker run` with this command:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
or without `--net=host`, but P2P direct connection cannot work.
```bash
docker run --name rustdesk-server \
-p 21115:21115 -p 21116:21116 -p 21116:21116/udp \
@@ -153,9 +190,165 @@ services:
restart: unless-stopped
```
We use these environment variables:
For this container image, you can use these environment variables, **in addition** to the ones specified in the following **ENV variables** section:
| variable | optional | description |
| --- | --- | --- |
| RELAY | no | the IP address/DNS name of the machine running this container |
| ENCRYPTED_ONLY | yes | if set to **"1"** unencrypted connection will not be accepted |
| KEY_PUB | yes | public part of the key pair |
| KEY_PRIV | yes | private part of the key pair |
### Secret management in S6-overlay based images
You can obviously keep the key pair in a docker volume, but best practice tells you not to write the keys to the filesystem; so we provide a couple of options.
On container startup, the presence of the keypair is checked (`/data/id_ed25519.pub` and `/data/id_ed25519`) and if one of these keys doesn't exist, it's recreated from ENV variables or docker secrets.
Then the validity of the keypair is checked: if the public and private keys don't match, the container will stop.
If you provide no keys, `hbbs` will generate one for you, and it'll place it in the default location.
#### Use ENV to store the key pair
You can use docker environment variables to store the keys. Just follow these examples:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
-e "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ==" \
-e "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE=" \
-v "$PWD/db:/db" -d rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
- "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ=="
- "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE="
volumes:
- ./db:/db
restart: unless-stopped
```
#### Use Docker secrets to store the key pair
You can alternatively use docker secrets to store the keys.
This is useful if you're using **docker-compose** or **Docker Swarm**.
Just follow these examples:
```bash
cat secrets/id_ed25519.pub | docker secret create key_pub -
cat secrets/id_ed25519 | docker secret create key_priv -
docker service create --name rustdesk-server \
--secret key_priv --secret key_pub \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
--mount "type=bind,source=$PWD/db,destination=/db" \
rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
volumes:
- ./db:/db
restart: unless-stopped
secrets:
- key_pub
- key_priv
secrets:
key_pub:
file: secrets/id_ed25519.pub
key_priv:
file: secrets/id_ed25519
```
## How to create a keypair
A keypair is needed for encryption; you can provide it, as explained before, but you need a way to create one.
You can use this command to generate a keypair:
```bash
/usr/bin/rustdesk-utils genkeypair
```
If you don't have (or don't want) the `rustdesk-utils` package installed on your system, you can invoke the same command with docker:
```bash
docker run --rm --entrypoint /usr/bin/rustdesk-utils rustdesk/rustdesk-server-s6:latest genkeypair
```
The output will be something like this:
```text
Public Key: 8BLLhtzUBU/XKAH4mep3p+IX4DSApe7qbAwNH9nv4yA=
Secret Key: egAVd44u33ZEUIDTtksGcHeVeAwywarEdHmf99KM5ajwEsuG3NQFT9coAfiZ6nen4hfgNICl7upsDA0f2e/jIA==
```
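If you plan to use these keys with the Docker-secrets setup shown above, one possible way to split the output into the expected `secrets/` files is sketched below (this assumes the secret files simply contain the base64 strings printed by `rustdesk-utils`):
```bash
# Sketch: write the genkeypair output into the files used by the secrets examples above,
# then register them as Docker secrets (assumes the raw base64 strings are what is expected).
mkdir -p secrets
docker run --rm --entrypoint /usr/bin/rustdesk-utils rustdesk/rustdesk-server-s6:latest genkeypair |
  awk '/^Public Key:/ {print $3 > "secrets/id_ed25519.pub"}
       /^Secret Key:/ {print $3 > "secrets/id_ed25519"}'
cat secrets/id_ed25519.pub | docker secret create key_pub -
cat secrets/id_ed25519 | docker secret create key_priv -
```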
## .deb packages
Separate .deb packages are available for each binary, you can find them in the [Releases](https://github.com/rustdesk/rustdesk-server/releases).
These packages are meant for the following distributions:
- Ubuntu 24.04 LTS
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Debian 12 bookworm
- Debian 11 bullseye
- Debian 10 buster
## ENV variables
`hbbs` and `hbbr` can be configured using these ENV variables.
You can specify the variables as usual or use an `.env` file.
| variable | binary | description |
| --- | --- | --- |
| ALWAYS_USE_RELAY | hbbs | if set to **"Y"** disallows direct peer connection |
| DB_URL | hbbs | path for database file |
| DOWNGRADE_START_CHECK | hbbr | delay (in seconds) before downgrade check |
| DOWNGRADE_THRESHOLD | hbbr | threshold of downgrade check (bit/ms) |
| KEY | hbbs/hbbr | if set force the use of a specific key, if set to **"_"** force the use of any key |
| LIMIT_SPEED | hbbr | speed limit (in Mb/s) |
| PORT | hbbs/hbbr | listening port (21116 for hbbs - 21117 for hbbr) |
| RELAY | hbbs | IP address/DNS name of the machines running hbbr (separated by comma) |
| RUST_LOG | all | set debug level (error\|warn\|info\|debug\|trace) |
| SINGLE_BANDWIDTH | hbbr | max bandwidth for a single connection (in Mb/s) |
| TOTAL_BANDWIDTH | hbbr | max total bandwidth (in Mb/s) |

Binary file not shown.

41
debian/changelog vendored Normal file

@@ -0,0 +1,41 @@
rustdesk-server (1.1.13) UNRELEASED; urgency=medium
* Version check and refactor hbb_common to share with rustdesk client
rustdesk-server (1.1.12) UNRELEASED; urgency=medium
* WS real ip
* Bump s6-overlay to v3.2.0.0 and fix env warnings
rustdesk-server (1.1.11-1) UNRELEASED; urgency=medium
* set reuse port to make restart friendly
* revert hbbr `-k` to not ruin back-compatibility
rustdesk-server (1.1.11) UNRELEASED; urgency=medium
* change -k to default '-', so you need not to set -k any more
rustdesk-server (1.1.10-3) UNRELEASED; urgency=medium
* fix on -2
rustdesk-server (1.1.10-2) UNRELEASED; urgency=medium
* fix hangup signal exit when run with nohup
* some minors
rustdesk-server (1.1.9) UNRELEASED; urgency=medium
* remove unsafe
rustdesk-server (1.1.8) UNRELEASED; urgency=medium
* fix test_hbbs and mask in lan
rustdesk-server (1.1.7) UNRELEASED; urgency=medium
* ipv6 support
-- rustdesk <info@rustdesk.com> Wed, 11 Jan 2023 11:27:00 +0800
rustdesk-server (1.1.6) UNRELEASED; urgency=medium
* Initial release
-- open-trade <info@rustdesk.com> Fri, 15 Jul 2022 12:27:27 +0200

1
debian/compat vendored Normal file

@@ -0,0 +1 @@
10

27
debian/control.tpl vendored Normal file

@@ -0,0 +1,27 @@
Source: rustdesk-server
Section: net
Priority: optional
Maintainer: open-trade <info@rustdesk.com>
Build-Depends: debhelper (>= 10), pkg-config
Standards-Version: 4.5.0
Homepage: https://rustdesk.com/
Package: rustdesk-server-hbbs
Architecture: {{ ARCH }}
Depends: systemd ${misc:Depends}
Description: RustDesk server
Self-host your own RustDesk server, it is free and open source.
Package: rustdesk-server-hbbr
Architecture: {{ ARCH }}
Depends: systemd ${misc:Depends}
Description: RustDesk server
Self-host your own RustDesk server, it is free and open source.
This package contains the RustDesk relay server.
Package: rustdesk-server-utils
Architecture: {{ ARCH }}
Depends: ${misc:Depends}
Description: RustDesk server
Self-host your own RustDesk server, it is free and open source.
This package contains the rustdesk-utils binary.

679
debian/copyright vendored Normal file

@@ -0,0 +1,679 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: rustdesk-server
Files: *
Copyright: Copyright 2022 open-trade <info@rustdesk.com>
License: AGPL-3.0
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
.
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
.
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
.
Preamble
.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
.
The precise terms and conditions for copying, distribution and
modification follow.
.
TERMS AND CONDITIONS
.
0. Definitions.
.
"This License" refers to version 3 of the GNU Affero General Public License.
.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
.
A "covered work" means either the unmodified Program or a work based
on the Program.
.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
.
1. Source Code.
.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
.
The Corresponding Source for a work in source code form is that
same work.
.
2. Basic Permissions.
.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
.
4. Conveying Verbatim Copies.
.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
.
5. Conveying Modified Source Versions.
.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
.
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
.
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
.
6. Conveying Non-Source Forms.
.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
.
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
.
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
.
7. Additional Terms.
.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
.
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
.
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
.
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
.
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
.
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
.
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
.
8. Termination.
.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
.
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
.
9. Acceptance Not Required for Having Copies.
.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
.
10. Automatic Licensing of Downstream Recipients.
.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
.
11. Patents.
.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
.
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
.
12. No Surrender of Others' Freedom.
.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
.
13. Remote Network Interaction; Use with the GNU General Public License.
.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
.
14. Revised Versions of this License.
.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
.
15. Disclaimer of Warranty.
.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
.
16. Limitation of Liability.
.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
.
17. Interpretation of Sections 15 and 16.
.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
.
END OF TERMS AND CONDITIONS
.
How to Apply These Terms to Your New Programs
.
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
.
Also add information on how to contact you by electronic and paper mail.
.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.

debian/rules vendored Executable file

@@ -0,0 +1,6 @@
#!/usr/bin/make -f
%:
dh $@
override_dh_builddeb:
dh_builddeb -- -Zgzip
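This is a minimal debhelper rules file; the override forces gzip compression for the resulting .deb archives. With the control and install files in place, a package build could be driven by the standard Debian tooling, for example:
    # Build unsigned binary packages from the source tree.
    dpkg-buildpackage -b -us -uc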

debian/rustdesk-server-hbbr.install vendored Normal file

@@ -0,0 +1,2 @@
bin/hbbr usr/bin
systemd/rustdesk-hbbr.service lib/systemd/system

debian/rustdesk-server-hbbr.postinst vendored Normal file

@@ -0,0 +1,28 @@
#!/bin/sh
set -e
SERVICE=rustdesk-hbbr.service
if [ "$1" = "configure" ]; then
mkdir -p /var/log/rustdesk-server
fi
case "$1" in
configure|abort-upgrade|abort-deconfigure|abort-remove)
mkdir -p /var/lib/rustdesk-server/
deb-systemd-helper unmask "${SERVICE}" >/dev/null || true
if deb-systemd-helper --quiet was-enabled "${SERVICE}"; then
deb-systemd-invoke enable "${SERVICE}" >/dev/null || true
else
deb-systemd-invoke update-state "${SERVICE}" >/dev/null || true
fi
systemctl --system daemon-reload >/dev/null || true
if [ -n "$2" ]; then
deb-systemd-invoke restart "${SERVICE}" >/dev/null || true
else
deb-systemd-invoke start "${SERVICE}" >/dev/null || true
fi
;;
esac
exit 0
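The postinst enables and starts rustdesk-hbbr.service (or restarts it on upgrade, when "$2" is set). A quick way to verify the result on a systemd host:
    systemctl status rustdesk-hbbr.service
    journalctl -u rustdesk-hbbr.service -n 50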

debian/rustdesk-server-hbbr.postrm vendored Normal file

@@ -0,0 +1,18 @@
#!/bin/sh
set -e
SERVICE=rustdesk-hbbr.service
systemctl --system daemon-reload >/dev/null || true
if [ "$1" = "purge" ]; then
rm -rf /var/log/rustdesk-server/rustdesk-hbbr.*
deb-systemd-helper purge "${SERVICE}" >/dev/null || true
deb-systemd-helper unmask "${SERVICE}" >/dev/null || true
fi
if [ "$1" = "remove" ]; then
deb-systemd-helper mask "${SERVICE}" >/dev/null || true
fi
exit 0

debian/rustdesk-server-hbbr.prerm vendored Normal file

@@ -0,0 +1,13 @@
#!/bin/sh
set -e
SERVICE=rustdesk-hbbr.service
case "$1" in
remove|deconfigure)
deb-systemd-invoke stop "${SERVICE}" >/dev/null || true
deb-systemd-invoke disable "${SERVICE}" >/dev/null || true
;;
esac
exit 0

debian/rustdesk-server-hbbs.install vendored Normal file

@@ -0,0 +1,2 @@
bin/hbbs usr/bin
systemd/rustdesk-hbbs.service lib/systemd/system

debian/rustdesk-server-hbbs.postinst vendored Normal file

@@ -0,0 +1,28 @@
#!/bin/sh
set -e
SERVICE=rustdesk-hbbs.service
if [ "$1" = "configure" ]; then
mkdir -p /var/log/rustdesk-server
fi
case "$1" in
configure|abort-upgrade|abort-deconfigure|abort-remove)
mkdir -p /var/lib/rustdesk-server/
deb-systemd-helper unmask "${SERVICE}" >/dev/null || true
if deb-systemd-helper --quiet was-enabled "${SERVICE}"; then
deb-systemd-invoke enable "${SERVICE}" >/dev/null || true
else
deb-systemd-invoke update-state "${SERVICE}" >/dev/null || true
fi
systemctl --system daemon-reload >/dev/null || true
if [ -n "$2" ]; then
deb-systemd-invoke restart "${SERVICE}" >/dev/null || true
else
deb-systemd-invoke start "${SERVICE}" >/dev/null || true
fi
;;
esac
exit 0

debian/rustdesk-server-hbbs.postrm vendored Normal file

@@ -0,0 +1,18 @@
#!/bin/sh
set -e
SERVICE=rustdesk-hbbs.service
systemctl --system daemon-reload >/dev/null || true
if [ "$1" = "purge" ]; then
rm -rf /var/lib/rustdesk-server/ /var/log/rustdesk-server/rustdesk-hbbs.*
deb-systemd-helper purge "${SERVICE}" >/dev/null || true
deb-systemd-helper unmask "${SERVICE}" >/dev/null || true
fi
if [ "$1" = "remove" ]; then
deb-systemd-helper mask "${SERVICE}" >/dev/null || true
fi
exit 0

debian/rustdesk-server-hbbs.prerm vendored Normal file

@@ -0,0 +1,13 @@
#!/bin/sh
set -e
SERVICE=rustdesk-hbbs.service
case "$1" in
remove|deconfigure)
deb-systemd-invoke stop "${SERVICE}" >/dev/null || true
deb-systemd-invoke disable "${SERVICE}" >/dev/null || true
;;
esac
exit 0

debian/rustdesk-server-utils.install vendored Normal file

@@ -0,0 +1 @@
bin/rustdesk-utils usr/bin

debian/source/format vendored Normal file

@@ -0,0 +1 @@
3.0 (native)


@@ -1,4 +1,4 @@
FROM ubuntu:20.04
FROM scratch
COPY hbbs /usr/bin/hbbs
COPY hbbr /usr/bin/hbbr
WORKDIR /root
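Since the image is now built FROM scratch, the hbbs and hbbr binaries copied in must be fully static. A sketch of the build flow under that assumption (the musl target triple and context layout are illustrative, not taken from the repository's CI):
    cargo build --release --target x86_64-unknown-linux-musl
    cp target/x86_64-unknown-linux-musl/release/hbbs .
    cp target/x86_64-unknown-linux-musl/release/hbbr .
    docker build -t rustdesk-server .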


@@ -15,7 +15,7 @@ services:
image: rustdesk/rustdesk-server:latest
command: hbbs -r rustdesk.example.com:21117
volumes:
- ./hbbs:/root
- ./data:/root
networks:
- rustdesk-net
depends_on:
@@ -30,7 +30,7 @@ services:
image: rustdesk/rustdesk-server:latest
command: hbbr
volumes:
- ./hbbr:/root
- ./data:/root
networks:
- rustdesk-net
restart: unless-stopped
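Both services now mount the same ./data directory, so hbbs and hbbr share one key pair and database instead of the separate ./hbbs and ./hbbr folders. A minimal usage sketch, assuming this file is saved as docker-compose.yml:
    mkdir -p data
    docker compose up -d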


@@ -1,18 +1,19 @@
FROM busybox:stable
ARG S6_OVERLAY_VERSION=3.1.1.2
ARG S6_OVERLAY_VERSION=3.2.0.0
ARG S6_ARCH=x86_64
ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz /tmp
ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-${S6_ARCH}.tar.xz /tmp
RUN \
tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz && \
tar -C / -Jxpf /tmp/s6-overlay-${S6_ARCH}.tar.xz && \
rm /tmp/s6-overlay*.tar.xz
rm /tmp/s6-overlay*.tar.xz && \
ln -s /run /var/run
COPY rootfs /
ENV RELAY relay.example.com
ENV ENCRYPTED_ONLY 0
ENV RELAY=relay.example.com
ENV ENCRYPTED_ONLY=0
EXPOSE 21115 21116 21116/udp 21117 21118 21119
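S6_OVERLAY_VERSION and S6_ARCH are ordinary build arguments, so the overlay version or target architecture can be overridden at build time. A hedged example for an arm64 build (the aarch64 artifact name follows the s6-overlay release naming; the image tag is illustrative):
    docker build --build-arg S6_ARCH=aarch64 -t rustdesk-server-s6:arm64 .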


@@ -0,0 +1 @@
key-secret


@@ -1,3 +1,5 @@
#!/command/execlineb -P
posix-cd /data
/usr/bin/hbbr
#!/command/with-contenv sh
cd /data
PARAMS=
[ "${ENCRYPTED_ONLY}" = "1" ] && PARAMS="-k _"
/usr/bin/hbbr $PARAMS
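With ENCRYPTED_ONLY=1 the script appends -k _, which makes hbbr refuse unencrypted connections; otherwise it starts with no extra flags. Outside the container the equivalent invocation would simply be:
    # Same effect as ENCRYPTED_ONLY=1 inside the container.
    hbbr -k _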


@@ -0,0 +1,2 @@
key-secret
hbbr


@@ -1,4 +1,5 @@
#!/command/with-contenv sh
sleep 2
cd /data
PARAMS=
[ "${ENCRYPTED_ONLY}" = "1" ] && PARAMS="-k _"


@@ -0,0 +1 @@
oneshot


@@ -0,0 +1 @@
/etc/s6-overlay/s6-rc.d/key-secret/up.real


@@ -0,0 +1,58 @@
#!/command/with-contenv sh
if [ ! -d /data ] ; then
mkdir /data
fi
# normal docker secrets
if [ ! -f /data/id_ed25519.pub ] && [ -r /run/secrets/key_pub ] ; then
cp /run/secrets/key_pub /data/id_ed25519.pub
echo "Public key created from secret"
fi
if [ ! -f /data/id_ed25519 ] && [ -r /run/secrets/key_priv ] ; then
cp /run/secrets/key_priv /data/id_ed25519
echo "Private key created from secret"
fi
# ENV variables
if [ ! -f /data/id_ed25519.pub ] && [ ! "$KEY_PUB" = "" ] ; then
echo -n "$KEY_PUB" > /data/id_ed25519.pub
echo "Public key created from ENV variable"
fi
if [ ! -f /data/id_ed25519 ] && [ ! "$KEY_PRIV" = "" ] ; then
echo -n "$KEY_PRIV" > /data/id_ed25519
echo "Private key created from ENV variable"
fi
# check if both keys provided
if [ -f /data/id_ed25519.pub ] && [ ! -f /data/id_ed25519 ] ; then
echo "Private key missing."
echo "You must provide BOTH the private and the public key."
/run/s6/basedir/bin/halt
exit 1
fi
if [ ! -f /data/id_ed25519.pub ] && [ -f /data/id_ed25519 ] ; then
echo "Public key missing."
echo "You must provide BOTH the private and the public key."
/run/s6/basedir/bin/halt
exit 1
fi
# here we have either no keys or both
# if we have both keys, we fix permissions and ownership
# and check for keypair validation
if [ -f /data/id_ed25519.pub ] && [ -f /data/id_ed25519 ] ; then
chmod 0600 /data/id_ed25519.pub /data/id_ed25519
chown root:root /data/id_ed25519.pub /data/id_ed25519
/usr/bin/rustdesk-utils validatekeypair "$(cat /data/id_ed25519.pub)" "$(cat /data/id_ed25519)" || {
echo "Key pair not valid"
/run/s6/basedir/bin/halt
exit 1
}
fi
# if we have no keypair, hbbs will generate one
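The script accepts a key pair either from Docker secrets (key_pub/key_priv) or from the KEY_PUB/KEY_PRIV environment variables, validates it with rustdesk-utils, and halts the container if only one half is provided. A sketch of supplying an existing pair via environment variables (image name and networking mode are illustrative):
    docker run -d \
      -e "KEY_PUB=$(cat id_ed25519.pub)" \
      -e "KEY_PRIV=$(cat id_ed25519)" \
      -v "$PWD/data:/data" \
      --net=host rustdesk/rustdesk-server-s6:latest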

libs/hbb_common Submodule

Submodule libs/hbb_common added at 49c6b24a7a
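Because libs/hbb_common is now a git submodule rather than vendored sources, a fresh checkout needs the submodule initialized before building, for example:
    git clone --recurse-submodules https://github.com/rustdesk/rustdesk-server
    # or, in an existing clone:
    git submodule update --init --recursive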


@@ -1,4 +0,0 @@
/target
**/*.rs.bk
Cargo.lock
src/protos/


@@ -1,48 +0,0 @@
[package]
name = "hbb_common"
version = "0.1.0"
authors = ["open-trade <info@opentradesolutions.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
protobuf = "3.0.0-alpha.2"
tokio = { version = "1.15", features = ["full"] }
tokio-util = { version = "0.6", features = ["full"] }
futures = "0.3"
bytes = "1.1"
log = "0.4"
env_logger = "0.9"
socket2 = { version = "0.3", features = ["reuseport"] }
zstd = "0.9"
quinn = {version = "0.8", optional = true }
anyhow = "1.0"
futures-util = "0.3"
directories-next = "2.0"
rand = "0.8"
serde_derive = "1.0"
serde = "1.0"
lazy_static = "1.4"
confy = { git = "https://github.com/open-trade/confy" }
dirs-next = "2.0"
filetime = "0.2"
sodiumoxide = "0.2"
regex = "1.4"
tokio-socks = { git = "https://github.com/open-trade/tokio-socks" }
[target.'cfg(not(any(target_os = "android", target_os = "ios")))'.dependencies]
mac_address = "1.1"
[features]
quic = []
[build-dependencies]
protobuf-codegen-pure = "3.0.0-alpha.2"
[target.'cfg(target_os = "windows")'.dependencies]
winapi = { version = "0.3", features = ["winuser"] }
[dev-dependencies]
toml = "0.5"
serde_json = "1.0"


@@ -1,9 +0,0 @@
fn main() {
std::fs::create_dir_all("src/protos").unwrap();
protobuf_codegen_pure::Codegen::new()
.out_dir("src/protos")
.inputs(&["protos/rendezvous.proto", "protos/message.proto"])
.include("protos")
.run()
.expect("Codegen failed.");
}


@@ -1,481 +0,0 @@
syntax = "proto3";
package hbb;
message VP9 {
bytes data = 1;
bool key = 2;
int64 pts = 3;
}
message VP9s { repeated VP9 frames = 1; }
message RGB { bool compress = 1; }
// planes data send directly in binary for better use arraybuffer on web
message YUV {
bool compress = 1;
int32 stride = 2;
}
message VideoFrame {
oneof union {
VP9s vp9s = 6;
RGB rgb = 7;
YUV yuv = 8;
}
}
message IdPk {
string id = 1;
bytes pk = 2;
}
message DisplayInfo {
sint32 x = 1;
sint32 y = 2;
int32 width = 3;
int32 height = 4;
string name = 5;
bool online = 6;
}
message PortForward {
string host = 1;
int32 port = 2;
}
message FileTransfer {
string dir = 1;
bool show_hidden = 2;
}
message LoginRequest {
string username = 1;
bytes password = 2;
string my_id = 4;
string my_name = 5;
OptionMessage option = 6;
oneof union {
FileTransfer file_transfer = 7;
PortForward port_forward = 8;
}
bool video_ack_required = 9;
}
message ChatMessage { string text = 1; }
message PeerInfo {
string username = 1;
string hostname = 2;
string platform = 3;
repeated DisplayInfo displays = 4;
int32 current_display = 5;
bool sas_enabled = 6;
string version = 7;
int32 conn_id = 8;
}
message LoginResponse {
oneof union {
string error = 1;
PeerInfo peer_info = 2;
}
}
message MouseEvent {
int32 mask = 1;
sint32 x = 2;
sint32 y = 3;
repeated ControlKey modifiers = 4;
}
enum ControlKey {
Unknown = 0;
Alt = 1;
Backspace = 2;
CapsLock = 3;
Control = 4;
Delete = 5;
DownArrow = 6;
End = 7;
Escape = 8;
F1 = 9;
F10 = 10;
F11 = 11;
F12 = 12;
F2 = 13;
F3 = 14;
F4 = 15;
F5 = 16;
F6 = 17;
F7 = 18;
F8 = 19;
F9 = 20;
Home = 21;
LeftArrow = 22;
/// meta key (also known as "windows"; "super"; and "command")
Meta = 23;
/// option key on macOS (alt key on Linux and Windows)
Option = 24; // deprecated, use Alt instead
PageDown = 25;
PageUp = 26;
Return = 27;
RightArrow = 28;
Shift = 29;
Space = 30;
Tab = 31;
UpArrow = 32;
Numpad0 = 33;
Numpad1 = 34;
Numpad2 = 35;
Numpad3 = 36;
Numpad4 = 37;
Numpad5 = 38;
Numpad6 = 39;
Numpad7 = 40;
Numpad8 = 41;
Numpad9 = 42;
Cancel = 43;
Clear = 44;
Menu = 45; // deprecated, use Alt instead
Pause = 46;
Kana = 47;
Hangul = 48;
Junja = 49;
Final = 50;
Hanja = 51;
Kanji = 52;
Convert = 53;
Select = 54;
Print = 55;
Execute = 56;
Snapshot = 57;
Insert = 58;
Help = 59;
Sleep = 60;
Separator = 61;
Scroll = 62;
NumLock = 63;
RWin = 64;
Apps = 65;
Multiply = 66;
Add = 67;
Subtract = 68;
Decimal = 69;
Divide = 70;
Equals = 71;
NumpadEnter = 72;
RShift = 73;
RControl = 74;
RAlt = 75;
CtrlAltDel = 100;
LockScreen = 101;
}
message KeyEvent {
bool down = 1;
bool press = 2;
oneof union {
ControlKey control_key = 3;
uint32 chr = 4;
uint32 unicode = 5;
string seq = 6;
}
repeated ControlKey modifiers = 8;
}
message CursorData {
uint64 id = 1;
sint32 hotx = 2;
sint32 hoty = 3;
int32 width = 4;
int32 height = 5;
bytes colors = 6;
}
message CursorPosition {
sint32 x = 1;
sint32 y = 2;
}
message Hash {
string salt = 1;
string challenge = 2;
}
message Clipboard {
bool compress = 1;
bytes content = 2;
}
enum FileType {
Dir = 0;
DirLink = 2;
DirDrive = 3;
File = 4;
FileLink = 5;
}
message FileEntry {
FileType entry_type = 1;
string name = 2;
bool is_hidden = 3;
uint64 size = 4;
uint64 modified_time = 5;
}
message FileDirectory {
int32 id = 1;
string path = 2;
repeated FileEntry entries = 3;
}
message ReadDir {
string path = 1;
bool include_hidden = 2;
}
message ReadAllFiles {
int32 id = 1;
string path = 2;
bool include_hidden = 3;
}
message FileAction {
oneof union {
ReadDir read_dir = 1;
FileTransferSendRequest send = 2;
FileTransferReceiveRequest receive = 3;
FileDirCreate create = 4;
FileRemoveDir remove_dir = 5;
FileRemoveFile remove_file = 6;
ReadAllFiles all_files = 7;
FileTransferCancel cancel = 8;
}
}
message FileTransferCancel { int32 id = 1; }
message FileResponse {
oneof union {
FileDirectory dir = 1;
FileTransferBlock block = 2;
FileTransferError error = 3;
FileTransferDone done = 4;
}
}
message FileTransferBlock {
int32 id = 1;
sint32 file_num = 2;
bytes data = 3;
bool compressed = 4;
}
message FileTransferError {
int32 id = 1;
string error = 2;
sint32 file_num = 3;
}
message FileTransferSendRequest {
int32 id = 1;
string path = 2;
bool include_hidden = 3;
}
message FileTransferDone {
int32 id = 1;
sint32 file_num = 2;
}
message FileTransferReceiveRequest {
int32 id = 1;
string path = 2; // path written to
repeated FileEntry files = 3;
}
message FileRemoveDir {
int32 id = 1;
string path = 2;
bool recursive = 3;
}
message FileRemoveFile {
int32 id = 1;
string path = 2;
sint32 file_num = 3;
}
message FileDirCreate {
int32 id = 1;
string path = 2;
}
// main logic from freeRDP
message CliprdrMonitorReady {
int32 conn_id = 1;
}
message CliprdrFormat {
int32 conn_id = 1;
int32 id = 2;
string format = 3;
}
message CliprdrServerFormatList {
int32 conn_id = 1;
repeated CliprdrFormat formats = 2;
}
message CliprdrServerFormatListResponse {
int32 conn_id = 1;
int32 msg_flags = 2;
}
message CliprdrServerFormatDataRequest {
int32 conn_id = 1;
int32 requested_format_id = 2;
}
message CliprdrServerFormatDataResponse {
int32 conn_id = 1;
int32 msg_flags = 2;
bytes format_data = 3;
}
message CliprdrFileContentsRequest {
int32 conn_id = 1;
int32 stream_id = 2;
int32 list_index = 3;
int32 dw_flags = 4;
int32 n_position_low = 5;
int32 n_position_high = 6;
int32 cb_requested = 7;
bool have_clip_data_id = 8;
int32 clip_data_id = 9;
}
message CliprdrFileContentsResponse {
int32 conn_id = 1;
int32 msg_flags = 3;
int32 stream_id = 4;
bytes requested_data = 5;
}
message Cliprdr {
oneof union {
CliprdrMonitorReady ready = 1;
CliprdrServerFormatList format_list = 2;
CliprdrServerFormatListResponse format_list_response = 3;
CliprdrServerFormatDataRequest format_data_request = 4;
CliprdrServerFormatDataResponse format_data_response = 5;
CliprdrFileContentsRequest file_contents_request = 6;
CliprdrFileContentsResponse file_contents_response = 7;
}
}
message SwitchDisplay {
int32 display = 1;
sint32 x = 2;
sint32 y = 3;
int32 width = 4;
int32 height = 5;
}
message PermissionInfo {
enum Permission {
Keyboard = 0;
Clipboard = 2;
Audio = 3;
File = 4;
}
Permission permission = 1;
bool enabled = 2;
}
enum ImageQuality {
NotSet = 0;
Low = 2;
Balanced = 3;
Best = 4;
}
message OptionMessage {
enum BoolOption {
NotSet = 0;
No = 1;
Yes = 2;
}
ImageQuality image_quality = 1;
BoolOption lock_after_session_end = 2;
BoolOption show_remote_cursor = 3;
BoolOption privacy_mode = 4;
BoolOption block_input = 5;
int32 custom_image_quality = 6;
BoolOption disable_audio = 7;
BoolOption disable_clipboard = 8;
BoolOption enable_file_transfer = 9;
}
message OptionResponse {
OptionMessage opt = 1;
string error = 2;
}
message TestDelay {
int64 time = 1;
bool from_client = 2;
}
message PublicKey {
bytes asymmetric_value = 1;
bytes symmetric_value = 2;
}
message SignedId { bytes id = 1; }
message AudioFormat {
uint32 sample_rate = 1;
uint32 channels = 2;
}
message AudioFrame { bytes data = 1; }
message Misc {
oneof union {
ChatMessage chat_message = 4;
SwitchDisplay switch_display = 5;
PermissionInfo permission_info = 6;
OptionMessage option = 7;
AudioFormat audio_format = 8;
string close_reason = 9;
bool refresh_video = 10;
OptionResponse option_response = 11;
bool video_received = 12;
}
}
message Message {
oneof union {
SignedId signed_id = 3;
PublicKey public_key = 4;
TestDelay test_delay = 5;
VideoFrame video_frame = 6;
LoginRequest login_request = 7;
LoginResponse login_response = 8;
Hash hash = 9;
MouseEvent mouse_event = 10;
AudioFrame audio_frame = 11;
CursorData cursor_data = 12;
CursorPosition cursor_position = 13;
uint64 cursor_id = 14;
KeyEvent key_event = 15;
Clipboard clipboard = 16;
FileAction file_action = 17;
FileResponse file_response = 18;
Misc misc = 19;
Cliprdr cliprdr = 20;
}
}


@@ -1,171 +0,0 @@
syntax = "proto3";
package hbb;
message RegisterPeer {
string id = 1;
int32 serial = 2;
}
enum ConnType {
DEFAULT_CONN = 0;
FILE_TRANSFER = 1;
PORT_FORWARD = 2;
RDP = 3;
}
message RegisterPeerResponse { bool request_pk = 2; }
message PunchHoleRequest {
string id = 1;
NatType nat_type = 2;
string licence_key = 3;
ConnType conn_type = 4;
string token = 5;
}
message PunchHole {
bytes socket_addr = 1;
string relay_server = 2;
NatType nat_type = 3;
}
message TestNatRequest {
int32 serial = 1;
}
// per my test, uint/int has no difference in encoding, int not good for negative, use sint for negative
message TestNatResponse {
int32 port = 1;
ConfigUpdate cu = 2; // for mobile
}
enum NatType {
UNKNOWN_NAT = 0;
ASYMMETRIC = 1;
SYMMETRIC = 2;
}
message PunchHoleSent {
bytes socket_addr = 1;
string id = 2;
string relay_server = 3;
NatType nat_type = 4;
string version = 5;
}
message RegisterPk {
string id = 1;
bytes uuid = 2;
bytes pk = 3;
string old_id = 4;
}
message RegisterPkResponse {
enum Result {
OK = 0;
UUID_MISMATCH = 2;
ID_EXISTS = 3;
TOO_FREQUENT = 4;
INVALID_ID_FORMAT = 5;
NOT_SUPPORT = 6;
SERVER_ERROR = 7;
}
Result result = 1;
}
message PunchHoleResponse {
bytes socket_addr = 1;
bytes pk = 2;
enum Failure {
ID_NOT_EXIST = 0;
OFFLINE = 2;
LICENSE_MISMATCH = 3;
LICENSE_OVERUSE = 4;
}
Failure failure = 3;
string relay_server = 4;
oneof union {
NatType nat_type = 5;
bool is_local = 6;
}
string other_failure = 7;
}
message ConfigUpdate {
int32 serial = 1;
repeated string rendezvous_servers = 2;
}
message RequestRelay {
string id = 1;
string uuid = 2;
bytes socket_addr = 3;
string relay_server = 4;
bool secure = 5;
string licence_key = 6;
ConnType conn_type = 7;
string token = 8;
}
message RelayResponse {
bytes socket_addr = 1;
string uuid = 2;
string relay_server = 3;
oneof union {
string id = 4;
bytes pk = 5;
}
string refuse_reason = 6;
string version = 7;
}
message SoftwareUpdate { string url = 1; }
// if in same intranet, punch hole won't work both for udp and tcp,
// even some router has below connection error if we connect itself,
// { kind: Other, error: "could not resolve to any address" },
// so we request local address to connect.
message FetchLocalAddr {
bytes socket_addr = 1;
string relay_server = 2;
}
message LocalAddr {
bytes socket_addr = 1;
bytes local_addr = 2;
string relay_server = 3;
string id = 4;
string version = 5;
}
message PeerDiscovery {
string cmd = 1;
string mac = 2;
string id = 3;
string username = 4;
string hostname = 5;
string platform = 6;
string misc = 7;
}
message RendezvousMessage {
oneof union {
RegisterPeer register_peer = 6;
RegisterPeerResponse register_peer_response = 7;
PunchHoleRequest punch_hole_request = 8;
PunchHole punch_hole = 9;
PunchHoleSent punch_hole_sent = 10;
PunchHoleResponse punch_hole_response = 11;
FetchLocalAddr fetch_local_addr = 12;
LocalAddr local_addr = 13;
ConfigUpdate configure_update = 14;
RegisterPk register_pk = 15;
RegisterPkResponse register_pk_response = 16;
SoftwareUpdate software_update = 17;
RequestRelay request_relay = 18;
RelayResponse relay_response = 19;
TestNatRequest test_nat_request = 20;
TestNatResponse test_nat_response = 21;
PeerDiscovery peer_discovery = 22;
}
}


@@ -1,274 +0,0 @@
use bytes::{Buf, BufMut, Bytes, BytesMut};
use std::io;
use tokio_util::codec::{Decoder, Encoder};
#[derive(Debug, Clone, Copy)]
pub struct BytesCodec {
state: DecodeState,
raw: bool,
max_packet_length: usize,
}
#[derive(Debug, Clone, Copy)]
enum DecodeState {
Head,
Data(usize),
}
impl BytesCodec {
pub fn new() -> Self {
Self {
state: DecodeState::Head,
raw: false,
max_packet_length: usize::MAX,
}
}
pub fn set_raw(&mut self) {
self.raw = true;
}
pub fn set_max_packet_length(&mut self, n: usize) {
self.max_packet_length = n;
}
fn decode_head(&mut self, src: &mut BytesMut) -> io::Result<Option<usize>> {
if src.is_empty() {
return Ok(None);
}
let head_len = ((src[0] & 0x3) + 1) as usize;
if src.len() < head_len {
return Ok(None);
}
let mut n = src[0] as usize;
if head_len > 1 {
n |= (src[1] as usize) << 8;
}
if head_len > 2 {
n |= (src[2] as usize) << 16;
}
if head_len > 3 {
n |= (src[3] as usize) << 24;
}
n >>= 2;
if n > self.max_packet_length {
return Err(io::Error::new(io::ErrorKind::InvalidData, "Too big packet"));
}
src.advance(head_len);
src.reserve(n);
return Ok(Some(n));
}
fn decode_data(&self, n: usize, src: &mut BytesMut) -> io::Result<Option<BytesMut>> {
if src.len() < n {
return Ok(None);
}
Ok(Some(src.split_to(n)))
}
}
impl Decoder for BytesCodec {
type Item = BytesMut;
type Error = io::Error;
fn decode(&mut self, src: &mut BytesMut) -> Result<Option<BytesMut>, io::Error> {
if self.raw {
if !src.is_empty() {
let len = src.len();
return Ok(Some(src.split_to(len)));
} else {
return Ok(None);
}
}
let n = match self.state {
DecodeState::Head => match self.decode_head(src)? {
Some(n) => {
self.state = DecodeState::Data(n);
n
}
None => return Ok(None),
},
DecodeState::Data(n) => n,
};
match self.decode_data(n, src)? {
Some(data) => {
self.state = DecodeState::Head;
Ok(Some(data))
}
None => Ok(None),
}
}
}
impl Encoder<Bytes> for BytesCodec {
type Error = io::Error;
fn encode(&mut self, data: Bytes, buf: &mut BytesMut) -> Result<(), io::Error> {
if self.raw {
buf.reserve(data.len());
buf.put(data);
return Ok(());
}
if data.len() <= 0x3F {
buf.put_u8((data.len() << 2) as u8);
} else if data.len() <= 0x3FFF {
buf.put_u16_le((data.len() << 2) as u16 | 0x1);
} else if data.len() <= 0x3FFFFF {
let h = (data.len() << 2) as u32 | 0x2;
buf.put_u16_le((h & 0xFFFF) as u16);
buf.put_u8((h >> 16) as u8);
} else if data.len() <= 0x3FFFFFFF {
buf.put_u32_le((data.len() << 2) as u32 | 0x3);
} else {
return Err(io::Error::new(io::ErrorKind::InvalidInput, "Overflow"));
}
buf.extend(data);
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_codec1() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3F, 1);
assert!(!codec.encode(bytes.into(), &mut buf).is_err());
let buf_saved = buf.clone();
assert_eq!(buf.len(), 0x3F + 1);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3F);
assert_eq!(res[0], 1);
} else {
assert!(false);
}
let mut codec2 = BytesCodec::new();
let mut buf2 = BytesMut::new();
if let Ok(None) = codec2.decode(&mut buf2) {
} else {
assert!(false);
}
buf2.extend(&buf_saved[0..1]);
if let Ok(None) = codec2.decode(&mut buf2) {
} else {
assert!(false);
}
buf2.extend(&buf_saved[1..]);
if let Ok(Some(res)) = codec2.decode(&mut buf2) {
assert_eq!(res.len(), 0x3F);
assert_eq!(res[0], 1);
} else {
assert!(false);
}
}
#[test]
fn test_codec2() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
assert!(!codec.encode("".into(), &mut buf).is_err());
assert_eq!(buf.len(), 1);
bytes.resize(0x3F + 1, 2);
assert!(!codec.encode(bytes.into(), &mut buf).is_err());
assert_eq!(buf.len(), 0x3F + 2 + 2);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0);
} else {
assert!(false);
}
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3F + 1);
assert_eq!(res[0], 2);
} else {
assert!(false);
}
}
#[test]
fn test_codec3() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3F - 1, 3);
assert!(!codec.encode(bytes.into(), &mut buf).is_err());
assert_eq!(buf.len(), 0x3F + 1 - 1);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3F - 1);
assert_eq!(res[0], 3);
} else {
assert!(false);
}
}
#[test]
fn test_codec4() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3FFF, 4);
assert!(!codec.encode(bytes.into(), &mut buf).is_err());
assert_eq!(buf.len(), 0x3FFF + 2);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3FFF);
assert_eq!(res[0], 4);
} else {
assert!(false);
}
}
#[test]
fn test_codec5() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3FFFFF, 5);
assert!(!codec.encode(bytes.into(), &mut buf).is_err());
assert_eq!(buf.len(), 0x3FFFFF + 3);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3FFFFF);
assert_eq!(res[0], 5);
} else {
assert!(false);
}
}
#[test]
fn test_codec6() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3FFFFF + 1, 6);
assert!(!codec.encode(bytes.into(), &mut buf).is_err());
let buf_saved = buf.clone();
assert_eq!(buf.len(), 0x3FFFFF + 4 + 1);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3FFFFF + 1);
assert_eq!(res[0], 6);
} else {
assert!(false);
}
let mut codec2 = BytesCodec::new();
let mut buf2 = BytesMut::new();
buf2.extend(&buf_saved[0..1]);
if let Ok(None) = codec2.decode(&mut buf2) {
} else {
assert!(false);
}
buf2.extend(&buf_saved[1..6]);
if let Ok(None) = codec2.decode(&mut buf2) {
} else {
assert!(false);
}
buf2.extend(&buf_saved[6..]);
if let Ok(Some(res)) = codec2.decode(&mut buf2) {
assert_eq!(res.len(), 0x3FFFFF + 1);
assert_eq!(res[0], 6);
} else {
assert!(false);
}
}
}
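For reference, a minimal standalone sketch (not part of the original file) of the length-prefix rule used by BytesCodec above: the two lowest bits of the little-endian header select its width in bytes, and the remaining bits carry the payload length.
fn header_len_for(payload_len: usize) -> Option<usize> {
    // Mirrors the branches in encode(): 1-, 2-, 3- or 4-byte headers.
    match payload_len {
        0..=0x3F => Some(1),
        0x40..=0x3FFF => Some(2),
        0x4000..=0x3FFFFF => Some(3),
        0x400000..=0x3FFFFFFF => Some(4),
        _ => None, // encode() reports anything larger as an "Overflow" error
    }
}
fn decode_header(head: &[u8]) -> (usize, usize) {
    // Returns (header_bytes, payload_len); assumes `head` already contains the
    // whole header. The flag in the low 2 bits is dropped by the final shift.
    let width = (head[0] & 0x3) as usize + 1;
    let mut value = 0u32;
    for (i, b) in head[..width].iter().enumerate() {
        value |= (*b as u32) << (8 * i); // little-endian
    }
    (width, (value >> 2) as usize)
}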


@@ -1,50 +0,0 @@
use std::cell::RefCell;
use zstd::block::{Compressor, Decompressor};
thread_local! {
static COMPRESSOR: RefCell<Compressor> = RefCell::new(Compressor::new());
static DECOMPRESSOR: RefCell<Decompressor> = RefCell::new(Decompressor::new());
}
/// The library supports regular compression levels from 1 up to ZSTD_maxCLevel(),
/// which is currently 22. Levels >= 20 should be used with caution, as they
/// require significantly more memory.
/// The default level is ZSTD_CLEVEL_DEFAULT == 3.
/// A value of 0 means "default", which is controlled by ZSTD_CLEVEL_DEFAULT.
pub fn compress(data: &[u8], level: i32) -> Vec<u8> {
let mut out = Vec::new();
COMPRESSOR.with(|c| {
if let Ok(mut c) = c.try_borrow_mut() {
match c.compress(data, level) {
Ok(res) => out = res,
Err(err) => {
crate::log::debug!("Failed to compress: {}", err);
}
}
}
});
out
}
pub fn decompress(data: &[u8]) -> Vec<u8> {
let mut out = Vec::new();
DECOMPRESSOR.with(|d| {
if let Ok(mut d) = d.try_borrow_mut() {
const MAX: usize = 1024 * 1024 * 64;
const MIN: usize = 1024 * 1024;
let mut n = 30 * data.len();
if n > MAX {
n = MAX;
}
if n < MIN {
n = MIN;
}
match d.decompress(data, n) {
Ok(res) => out = res,
Err(err) => {
crate::log::debug!("Failed to decompress: {}", err);
}
}
}
});
out
}
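As a usage note, both helpers above swallow errors and return an empty Vec, so callers typically keep the original data when the compressed form is not smaller (as the file-transfer code further down does). A minimal round-trip sketch, assuming this module is exposed as hbb_common::compress and that level 0 falls back to zstd's default level:
fn compress_roundtrip_example() {
    let data = vec![7u8; 4096];
    let packed = hbb_common::compress::compress(&data, 0);
    let unpacked = hbb_common::compress::decompress(&packed);
    assert_eq!(data, unpacked);
}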


@@ -1,876 +0,0 @@
use crate::log;
use directories_next::ProjectDirs;
use rand::Rng;
use serde_derive::{Deserialize, Serialize};
use sodiumoxide::crypto::sign;
use std::{
collections::HashMap,
fs,
net::{IpAddr, Ipv4Addr, SocketAddr},
path::{Path, PathBuf},
sync::{Arc, Mutex, RwLock},
time::SystemTime,
};
pub const RENDEZVOUS_TIMEOUT: u64 = 12_000;
pub const CONNECT_TIMEOUT: u64 = 18_000;
pub const REG_INTERVAL: i64 = 12_000;
pub const COMPRESS_LEVEL: i32 = 3;
const SERIAL: i32 = 1;
// 128x128
#[cfg(target_os = "macos")] // 128x128 on 160x160 canvas, then shrink to 128, mac looks better with padding
pub const ICON: &str = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAIAAAACACAMAAAD04JH5AAAAyVBMVEUAAAAAcf8Acf8Acf8Acv8Acf8Acf8Acf8Acf8AcP8Acf8Ab/8AcP8Acf////8AaP/z+f/o8v/k7v/5/v/T5f8AYP/u9v/X6f+hx/+Kuv95pP8Aef/B1/+TwP9xoP8BdP/g6P+Irv9ZmP8Bgf/E3f98q/9sn/+01f+Es/9nm/9Jif8hhv8off/M4P+syP+avP86iP/c7f+xy/9yqf9Om/9hk/9Rjv+60P99tv9fpf88lv8yjf8Tgf8deP+kvP8BiP8NeP8hkP80gP8oj2VLAAAADXRSTlMA7o7qLvnaxZ1FOxYPjH9HWgAABHJJREFUeNrtm+tW4jAQgBfwuu7MtIUWsOUiCCioIIgLiqvr+z/UHq/LJKVkmwTcc/r9E2nzlU4mSTP9lpGRkZGR8VX5cZjfL+yCEXYL+/nDH//U/Pd8DgyTy39Xbv7oIAcWyB0cqbW/sweW2NtRaj8H1sgpGOwUIAH7Bkd7YJW9dXFwAJY5WNP/cmCZQnJvzIN18on5LwfWySXlxEPYAIcad8D6PdiHDbCfIFCADVBIENiFDbCbIACKPPXrZ+cP8E6/0znvP4EymgIEravIRcTxu8HxNSJ60a8W0AYECKrlAN+YwAthCd9wm1Ug6wKzIn5SgRduXfwkqDasCjx0XFzi9PV6zwNcIuhcWBOg+ikySq8C9UD4dEKWBCoOcspvAuLHTo9sCDQiFPHotRM48j8G5gVur1FdAN2uaYEuiz7xFsgEJ2RUoMUakXuBTHHoGxQYOBhHjeUBAefEnMAowFhaLBOKuOemBBbxLRQrH2PBCgMvNCPQGMeevTb9zLrPxz2Mo+QbEaijzPUcOOHMQZkKGRAIPem39+bypREMPTkQW/oCfk866zAkiIFG4yIKRE/aAnfiSd0WrORY6pFdXQEqi9mvAQm0RIOSnoCcZ8vJoz3diCnjRk+g8VP4/fuQDJ2Lxr6WwG0gXs9aTpDzW0vgDBlVUpixR8gYk44AD8FrUKHr8JQJGgIDnoDqoALxmWPQSi9AVVzm8gKUuEPGr/QCvptwJkbSYT/TC4S8C96DGjTj86aHtAI0x2WaBIq0eSYYpRa4EsdWVVwWu9O0Aj6f6dyBMnwEraeOgSYu0wZlauzA47QCbT7DgAQSE+hZWoEBF/BBmWOewNMK3BsSqKUW4MGcWqCSVmDkbvkXGKQOwg6PAUO9oL3xXhA20yaiCjuwYygRVQlUOTWTCf2SuNJTxeFjgaHByGuAIvd8ItdPLTDhS7IuqEE1YSKVOgbayLhSFQhMzYh8hwfBs1r7c505YVIQYEdNoKwxK06MJiyrpUFHiF0NAfCQUVHoiRclIXJIR6C2fqG37pBHvcWpgwzvAtYwkR5UGV2e42UISdBJETl3mg8ouo54Rcnti1/vaT+iuUQBt500Cgo4U10BeHSkk57FB0JjWkKRMWgLUA0lLodtImAQdaMiiri3+gIAPZQoutHNsgKF1aaDMhMyIdBf8Th+Bh8MTjGWCpl5Wv43tDmnF+IUVMrcZgRoiAxhtrloYizNkZaAnF5leglbNhj0wYCAbCDvGb0mP4nib7O7ZlcYQ2m1gPtIZgVgGNNMeaVAaWR+57TrqgtUnm3sHQ+kYeE6fufUubG1ez50FXbPnWgBlgSABmN3TTcsRl2yWkHRrwbiunvk/W2+Mg1hPZplPDeXRbZzStFH15s1QIVd3UImP5z/bHpeeQLvRJ7XLFUffQIlCvqlXETQbgN9/rlYABGosv+Vi9m2Xs639YLGrZd0br+odetlvdsvbN56abfd4vbCzv9Q3v/ygoOV21A4OPpfXvH4Ai+5ZGRkZGRkbJA/t/I0QMzoMiEAAAAASUVORK5CYII=
";
#[cfg(not(target_os = "macos"))] // 128x128 no padding
pub const ICON: &str = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAIAAAACACAMAAAD04JH5AAAA7VBMVEUAAAAAcf8Acf8Acf8Adf8Acf8Acf8AcP8Acv8AcP8Acf8Acf8Acf8Acv8Acf8Acf8Ab/8AcP8Acf8Acf8Acf/////7/f8Dc/8TfP/1+f/n8v9Hmf/u9v+Uw//Q5f9hp/8Yfv8Qev8Ld/+52P+z1f+s0f81j/8wjP8Hdf/3+/8mh/8fg//x9//h7//H4P9xsP9rrf9oq/8rif/r9P/D3v+92/+Duv9bpP/d7f/U5/9NnP8/lP8jhP/L4v/B3P+OwP9+t/95tf9Rn/8bgf/Z6v+Zx/90sv9lqf85kf+hy/9UoP+Wxf+kzP+dyP+Lvv/H4q8IAAAAFHRSTlMA+u6bB6x5XR4V0+S4i4k5N+a81W8MiAQAAAVcSURBVHjazdvpWtpAGIbhgEutdW3fL2GHsMsiq4KI+66t5384XahF/GbizJAy3j/1Ah5CJhNCxpm1vbryLRrBfxKJrq+sbjtSa5u7WIDdzTVH5PNSBAsSWfrsMJ+iWKDoJ2fW8hIWbGl55vW/YuE2XhUsb8CCr9OCJVix9G//gyWf/o6/KCyJfrbwAfAPYS0CayK/j4mbsGjrV8AXWLTrONuwasdZhVWrzgqsWnG+wap1Jwqrok4EVkUcmKhdVvBaOVnzYEY/oJpMD4mo6ONF/ZSIUsX2FZjQA7xRqUET+y/v2W/Sy59u62DCDMgdJmhqgIk7eqWQBBNWwPhmj147w8QTzTjKVsGEEBBLuzSrhIkivTF8DD/Aa6forQNMHBD/VyXkgHGfuBN5ALln1TADOnESyGCiT8L/1kILqD6Q0BEm9kkofhdSwNUJiV1jQvZ/SnthBNSaJJGZbgGJUnX+gEqCZPpsJ2T2Y/MGVBrE8eOAvCA/X8A4QXLnmEhTgIPqPAG5IQU4fhmkFOT7HAFenwIU8Jd/TUEODQIUtu1eOj/dUD9cknOTpgEDkup3YrOfVStDUomcWcBVisTiNxVw3TPpgCl4RgFFybZ/9iHmn8uS2yYBA8m7qUEu9oOEejH9gHxC+PazCHbcFM8K+gGHJNAs4z2xgnAkVHQDcnG1IzvnCSfvom7AM3EZ9voah4+KXoAvGFJHMSgqEfegF3BBTKoOVfkMMXFfJ8AT7MuXUDeOE9PWCUiKBpKOlmAP1gngH2LChw7vhJgr9YD8Hnt0BxrE27CtHnDJR4AHTX1+KFAP4Ef0LHTxN9HwlAMSbAjmoavKZ8ayakDXYAhwN3wzqgZk2UPvwRjshmeqATeCT09f3mWnEqoBGf4NxAB/moRqADuOtmDiid6KqQVcsQeOYOKW3uqqBRwL5nITj/yrlFpAVrDpTJT5llQLaLMHwshY7UDgvD+VujDC96WWWsBtSAE5FnChFnAeUkDMdAvw88EqTNT5SYXpTlgPaRQM1AIGorkolNnoUS1gJHigCX48SaoF3Asuspg4Mz0U8+FTgIkCG01V09kwBQP8xG5ofD5AXeirkPEJSUlwSVIfP5ykVQNaggvz+k7prTvVgDKF8BnUXP4kqgEe/257E8Ig7EE1gA8g2stBTz7FLxqrB3SIeYaeQ2IG6gE5l2+Cmt5MGOfP4KsGiH8DOYWOoujnDY2ALHF3810goZFOQDVBTFx9Uj7eI6bp6QTgnLjeGGq6KeJuoRUQixN3pDYWyz1Rva8XIL5UPFQZCsmG3gV7R+dieS+Jd3iHLglce7oBuCOhp3zwHLxPQpfQDvBOSKjZqUIml3ZJ6AD6AajFSZJwewWR8ZPsEY26SQDaJOMeZP23w6bTJ6kBjAJQILm9hzqm7otu4G+nhgGxIQUlPLKzL7GhbxqAboMCuN2XXd+lAL0ajAMwclV+FD6jAPEy5ghAlhfwX2FODX445gHKxyN++fs64PUHmDMAbbYN2DlKk2QaScwdgMs4SZxMv4OJJSoIIQBl2Qtk3gk4qiOUANRPJQHB+0A6j5AC4J27QQEZ4eZPAsYBXFk0N/YD7iUrxRBqALxOTzoMC3x8lCFlfkMjuz8iLfk6fzQCQgjg8q3ZEd8RzUVuKelBh96Nzcc3qelL1V+2zfRv1xc56Ino3tpdPT7cd//MspfTrD/7R6p4W4O2qLMObfnyIHvvYcrPtkZjDybW7d/eb32Bg/UlHnYXuXz5CMt8rC90sr7Uy/5iN+vL/ewveLS/5NNKwcbyR1r2a3/h8wdY+v3L2tZC5oUvW2uO1M7qyvp/Xv6/48z4CTxjJEfyjEaMAAAAAElFTkSuQmCC
";
#[cfg(target_os = "macos")]
lazy_static::lazy_static! {
pub static ref ORG: Arc<RwLock<String>> = Arc::new(RwLock::new("com.carriez".to_owned()));
}
type Size = (i32, i32, i32, i32);
lazy_static::lazy_static! {
static ref CONFIG: Arc<RwLock<Config>> = Arc::new(RwLock::new(Config::load()));
static ref CONFIG2: Arc<RwLock<Config2>> = Arc::new(RwLock::new(Config2::load()));
static ref LOCAL_CONFIG: Arc<RwLock<LocalConfig>> = Arc::new(RwLock::new(LocalConfig::load()));
pub static ref ONLINE: Arc<Mutex<HashMap<String, i64>>> = Default::default();
pub static ref PROD_RENDEZVOUS_SERVER: Arc<RwLock<String>> = Default::default();
pub static ref APP_NAME: Arc<RwLock<String>> = Arc::new(RwLock::new("RustDesk".to_owned()));
}
#[cfg(any(target_os = "android", target_os = "ios"))]
lazy_static::lazy_static! {
pub static ref APP_DIR: Arc<RwLock<String>> = Default::default();
pub static ref APP_HOME_DIR: Arc<RwLock<String>> = Default::default();
}
const CHARS: &'static [char] = &[
'2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k',
'm', 'n', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
];
pub const RENDEZVOUS_SERVERS: &'static [&'static str] = &[
"rs-ny.rustdesk.com",
"rs-sg.rustdesk.com",
"rs-cn.rustdesk.com",
];
pub const RENDEZVOUS_PORT: i32 = 21116;
pub const RELAY_PORT: i32 = 21117;
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum NetworkType {
Direct,
ProxySocks,
}
#[derive(Debug, Default, Serialize, Deserialize, Clone, PartialEq)]
pub struct Config {
#[serde(default)]
pub id: String,
#[serde(default)]
password: String,
#[serde(default)]
salt: String,
#[serde(default)]
pub key_pair: (Vec<u8>, Vec<u8>), // sk, pk
#[serde(default)]
key_confirmed: bool,
#[serde(default)]
keys_confirmed: HashMap<String, bool>,
}
#[derive(Debug, Default, PartialEq, Serialize, Deserialize, Clone)]
pub struct Socks5Server {
#[serde(default)]
pub proxy: String,
#[serde(default)]
pub username: String,
#[serde(default)]
pub password: String,
}
// more variable configs
#[derive(Debug, Default, Serialize, Deserialize, Clone, PartialEq)]
pub struct Config2 {
#[serde(default)]
rendezvous_server: String,
#[serde(default)]
nat_type: i32,
#[serde(default)]
serial: i32,
#[serde(default)]
socks: Option<Socks5Server>,
// all other scalar values must come before this field, since TOML serializes tables last
#[serde(default)]
pub options: HashMap<String, String>,
}
#[derive(Debug, Default, Serialize, Deserialize, Clone)]
pub struct PeerConfig {
#[serde(default)]
pub password: Vec<u8>,
#[serde(default)]
pub size: Size,
#[serde(default)]
pub size_ft: Size,
#[serde(default)]
pub size_pf: Size,
#[serde(default)]
pub view_style: String, // original (default), scale
#[serde(default)]
pub image_quality: String,
#[serde(default)]
pub custom_image_quality: Vec<i32>,
#[serde(default)]
pub show_remote_cursor: bool,
#[serde(default)]
pub lock_after_session_end: bool,
#[serde(default)]
pub privacy_mode: bool,
#[serde(default)]
pub port_forwards: Vec<(i32, String, i32)>,
#[serde(default)]
pub direct_failures: i32,
#[serde(default)]
pub disable_audio: bool,
#[serde(default)]
pub disable_clipboard: bool,
#[serde(default)]
pub enable_file_transfer: bool,
// all other scalar values must come before this field, since TOML serializes tables last
#[serde(default)]
pub options: HashMap<String, String>,
#[serde(default)]
pub info: PeerInfoSerde,
}
#[derive(Debug, PartialEq, Default, Serialize, Deserialize, Clone)]
pub struct PeerInfoSerde {
#[serde(default)]
pub username: String,
#[serde(default)]
pub hostname: String,
#[serde(default)]
pub platform: String,
}
fn patch(path: PathBuf) -> PathBuf {
if let Some(_tmp) = path.to_str() {
#[cfg(windows)]
return _tmp
.replace(
"system32\\config\\systemprofile",
"ServiceProfiles\\LocalService",
)
.into();
#[cfg(target_os = "macos")]
return _tmp.replace("Application Support", "Preferences").into();
#[cfg(target_os = "linux")]
{
if _tmp == "/root" {
if let Ok(output) = std::process::Command::new("whoami").output() {
let user = String::from_utf8_lossy(&output.stdout)
.to_string()
.trim()
.to_owned();
if user != "root" {
return format!("/home/{}", user).into();
}
}
}
}
}
path
}
impl Config2 {
fn load() -> Config2 {
Config::load_::<Config2>("2")
}
pub fn file() -> PathBuf {
Config::file_("2")
}
fn store(&self) {
Config::store_(self, "2");
}
pub fn get() -> Config2 {
return CONFIG2.read().unwrap().clone();
}
pub fn set(cfg: Config2) -> bool {
let mut lock = CONFIG2.write().unwrap();
if *lock == cfg {
return false;
}
*lock = cfg;
lock.store();
true
}
}
pub fn load_path<T: serde::Serialize + serde::de::DeserializeOwned + Default + std::fmt::Debug>(
file: PathBuf,
) -> T {
let cfg = match confy::load_path(&file) {
Ok(config) => config,
Err(err) => {
log::error!("Failed to load config: {}", err);
T::default()
}
};
cfg
}
impl Config {
fn load_<T: serde::Serialize + serde::de::DeserializeOwned + Default + std::fmt::Debug>(
suffix: &str,
) -> T {
let file = Self::file_(suffix);
log::debug!("Configuration path: {}", file.display());
let cfg = load_path(file);
if suffix.is_empty() {
log::debug!("{:?}", cfg);
}
cfg
}
fn store_<T: serde::Serialize>(config: &T, suffix: &str) {
let file = Self::file_(suffix);
if let Err(err) = confy::store_path(file, config) {
log::error!("Failed to store config: {}", err);
}
}
fn load() -> Config {
Config::load_::<Config>("")
}
fn store(&self) {
Config::store_(self, "");
}
pub fn file() -> PathBuf {
Self::file_("")
}
fn file_(suffix: &str) -> PathBuf {
let name = format!("{}{}", *APP_NAME.read().unwrap(), suffix);
Self::path(name).with_extension("toml")
}
pub fn get_home() -> PathBuf {
#[cfg(any(target_os = "android", target_os = "ios"))]
return Self::path(APP_HOME_DIR.read().unwrap().as_str());
if let Some(path) = dirs_next::home_dir() {
patch(path)
} else if let Ok(path) = std::env::current_dir() {
path
} else {
std::env::temp_dir()
}
}
pub fn path<P: AsRef<Path>>(p: P) -> PathBuf {
#[cfg(any(target_os = "android", target_os = "ios"))]
{
let mut path: PathBuf = APP_DIR.read().unwrap().clone().into();
path.push(p);
return path;
}
#[cfg(not(target_os = "macos"))]
let org = "";
#[cfg(target_os = "macos")]
let org = ORG.read().unwrap().clone();
// /var/root for root
if let Some(project) = ProjectDirs::from("", &org, &*APP_NAME.read().unwrap()) {
let mut path = patch(project.config_dir().to_path_buf());
path.push(p);
return path;
}
return "".into();
}
#[allow(unreachable_code)]
pub fn log_path() -> PathBuf {
#[cfg(target_os = "macos")]
{
if let Some(path) = dirs_next::home_dir().as_mut() {
path.push(format!("Library/Logs/{}", *APP_NAME.read().unwrap()));
return path.clone();
}
}
#[cfg(target_os = "linux")]
{
let mut path = Self::get_home();
path.push(format!(".local/share/logs/{}", *APP_NAME.read().unwrap()));
std::fs::create_dir_all(&path).ok();
return path;
}
if let Some(path) = Self::path("").parent() {
let mut path: PathBuf = path.into();
path.push("log");
return path;
}
"".into()
}
pub fn ipc_path(postfix: &str) -> String {
#[cfg(windows)]
{
// \\ServerName\pipe\PipeName
// where ServerName is either the name of a remote computer or a period, to specify the local computer.
// https://docs.microsoft.com/en-us/windows/win32/ipc/pipe-names
format!(
"\\\\.\\pipe\\{}\\query{}",
*APP_NAME.read().unwrap(),
postfix
)
}
#[cfg(not(windows))]
{
use std::os::unix::fs::PermissionsExt;
#[cfg(target_os = "android")]
let mut path: PathBuf =
format!("{}/{}", *APP_DIR.read().unwrap(), *APP_NAME.read().unwrap()).into();
#[cfg(not(target_os = "android"))]
let mut path: PathBuf = format!("/tmp/{}", *APP_NAME.read().unwrap()).into();
fs::create_dir(&path).ok();
fs::set_permissions(&path, fs::Permissions::from_mode(0o0777)).ok();
path.push(format!("ipc{}", postfix));
path.to_str().unwrap_or("").to_owned()
}
}
pub fn icon_path() -> PathBuf {
let mut path = Self::path("icons");
if fs::create_dir_all(&path).is_err() {
path = std::env::temp_dir();
}
path
}
#[inline]
pub fn get_any_listen_addr() -> SocketAddr {
SocketAddr::new(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), 0)
}
pub fn get_rendezvous_server() -> String {
let mut rendezvous_server = Self::get_option("custom-rendezvous-server");
if rendezvous_server.is_empty() {
rendezvous_server = PROD_RENDEZVOUS_SERVER.read().unwrap().clone();
}
if rendezvous_server.is_empty() {
rendezvous_server = CONFIG2.read().unwrap().rendezvous_server.clone();
}
if rendezvous_server.is_empty() {
rendezvous_server = Self::get_rendezvous_servers()
.drain(..)
.next()
.unwrap_or("".to_owned());
}
if !rendezvous_server.contains(":") {
rendezvous_server = format!("{}:{}", rendezvous_server, RENDEZVOUS_PORT);
}
rendezvous_server
}
pub fn get_rendezvous_servers() -> Vec<String> {
let s = Self::get_option("custom-rendezvous-server");
if !s.is_empty() {
return vec![s];
}
let s = PROD_RENDEZVOUS_SERVER.read().unwrap().clone();
if !s.is_empty() {
return vec![s];
}
let serial_obsolete = CONFIG2.read().unwrap().serial > SERIAL;
if serial_obsolete {
let ss: Vec<String> = Self::get_option("rendezvous-servers")
.split(",")
.filter(|x| x.contains("."))
.map(|x| x.to_owned())
.collect();
if !ss.is_empty() {
return ss;
}
}
return RENDEZVOUS_SERVERS.iter().map(|x| x.to_string()).collect();
}
pub fn reset_online() {
*ONLINE.lock().unwrap() = Default::default();
}
pub fn update_latency(host: &str, latency: i64) {
ONLINE.lock().unwrap().insert(host.to_owned(), latency);
let mut host = "".to_owned();
let mut delay = i64::MAX;
for (tmp_host, tmp_delay) in ONLINE.lock().unwrap().iter() {
if tmp_delay > &0 && tmp_delay < &delay {
delay = tmp_delay.clone();
host = tmp_host.to_string();
}
}
if !host.is_empty() {
let mut config = CONFIG2.write().unwrap();
if host != config.rendezvous_server {
log::debug!("Update rendezvous_server in config to {}", host);
log::debug!("{:?}", *ONLINE.lock().unwrap());
config.rendezvous_server = host;
config.store();
}
}
}
pub fn set_id(id: &str) {
let mut config = CONFIG.write().unwrap();
if id == config.id {
return;
}
config.id = id.into();
config.store();
}
pub fn set_nat_type(nat_type: i32) {
let mut config = CONFIG2.write().unwrap();
if nat_type == config.nat_type {
return;
}
config.nat_type = nat_type;
config.store();
}
pub fn get_nat_type() -> i32 {
CONFIG2.read().unwrap().nat_type
}
pub fn set_serial(serial: i32) {
let mut config = CONFIG2.write().unwrap();
if serial == config.serial {
return;
}
config.serial = serial;
config.store();
}
pub fn get_serial() -> i32 {
std::cmp::max(CONFIG2.read().unwrap().serial, SERIAL)
}
fn get_auto_id() -> Option<String> {
#[cfg(any(target_os = "android", target_os = "ios"))]
{
return Some(
rand::thread_rng()
.gen_range(1_000_000_000..2_000_000_000)
.to_string(),
);
}
let mut id = 0u32;
#[cfg(not(any(target_os = "android", target_os = "ios")))]
if let Ok(Some(ma)) = mac_address::get_mac_address() {
for x in &ma.bytes()[2..] {
id = (id << 8) | (*x as u32);
}
id = id & 0x1FFFFFFF;
Some(id.to_string())
} else {
None
}
}
pub fn get_auto_password() -> String {
let mut rng = rand::thread_rng();
(0..6)
.map(|_| CHARS[rng.gen::<usize>() % CHARS.len()])
.collect()
}
pub fn get_key_confirmed() -> bool {
CONFIG.read().unwrap().key_confirmed
}
pub fn set_key_confirmed(v: bool) {
let mut config = CONFIG.write().unwrap();
if config.key_confirmed == v {
return;
}
config.key_confirmed = v;
if !v {
config.keys_confirmed = Default::default();
}
config.store();
}
pub fn get_host_key_confirmed(host: &str) -> bool {
if let Some(true) = CONFIG.read().unwrap().keys_confirmed.get(host) {
true
} else {
false
}
}
pub fn set_host_key_confirmed(host: &str, v: bool) {
if Self::get_host_key_confirmed(host) == v {
return;
}
let mut config = CONFIG.write().unwrap();
config.keys_confirmed.insert(host.to_owned(), v);
config.store();
}
pub fn set_key_pair(pair: (Vec<u8>, Vec<u8>)) {
let mut config = CONFIG.write().unwrap();
if config.key_pair == pair {
return;
}
config.key_pair = pair;
config.store();
}
pub fn get_key_pair() -> (Vec<u8>, Vec<u8>) {
// take the write lock here to make sure gen_keypair is not called more than once
let mut config = CONFIG.write().unwrap();
if config.key_pair.0.is_empty() {
let (pk, sk) = sign::gen_keypair();
config.key_pair = (sk.0.to_vec(), pk.0.into());
config.store();
}
config.key_pair.clone()
}
pub fn get_id() -> String {
let mut id = CONFIG.read().unwrap().id.clone();
if id.is_empty() {
if let Some(tmp) = Config::get_auto_id() {
id = tmp;
Config::set_id(&id);
}
}
id
}
pub fn get_id_or(b: String) -> String {
let a = CONFIG.read().unwrap().id.clone();
if a.is_empty() {
b
} else {
a
}
}
pub fn get_options() -> HashMap<String, String> {
CONFIG2.read().unwrap().options.clone()
}
pub fn set_options(v: HashMap<String, String>) {
let mut config = CONFIG2.write().unwrap();
if config.options == v {
return;
}
config.options = v;
config.store();
}
pub fn get_option(k: &str) -> String {
if let Some(v) = CONFIG2.read().unwrap().options.get(k) {
v.clone()
} else {
"".to_owned()
}
}
pub fn set_option(k: String, v: String) {
let mut config = CONFIG2.write().unwrap();
let v2 = if v.is_empty() { None } else { Some(&v) };
if v2 != config.options.get(&k) {
if v2.is_none() {
config.options.remove(&k);
} else {
config.options.insert(k, v);
}
config.store();
}
}
pub fn update_id() {
// to-do: what if one IP registers a lot of ids?
let id = Self::get_id();
let mut rng = rand::thread_rng();
let new_id = rng.gen_range(1_000_000_000..2_000_000_000).to_string();
Config::set_id(&new_id);
log::info!("id updated from {} to {}", id, new_id);
}
pub fn set_password(password: &str) {
let mut config = CONFIG.write().unwrap();
if password == config.password {
return;
}
config.password = password.into();
config.store();
}
pub fn get_password() -> String {
let mut password = CONFIG.read().unwrap().password.clone();
if password.is_empty() {
password = Config::get_auto_password();
Config::set_password(&password);
}
password
}
pub fn set_salt(salt: &str) {
let mut config = CONFIG.write().unwrap();
if salt == config.salt {
return;
}
config.salt = salt.into();
config.store();
}
pub fn get_salt() -> String {
let mut salt = CONFIG.read().unwrap().salt.clone();
if salt.is_empty() {
salt = Config::get_auto_password();
Config::set_salt(&salt);
}
salt
}
pub fn set_socks(socks: Option<Socks5Server>) {
let mut config = CONFIG2.write().unwrap();
if config.socks == socks {
return;
}
config.socks = socks;
config.store();
}
pub fn get_socks() -> Option<Socks5Server> {
CONFIG2.read().unwrap().socks.clone()
}
pub fn get_network_type() -> NetworkType {
match &CONFIG2.read().unwrap().socks {
None => NetworkType::Direct,
Some(_) => NetworkType::ProxySocks,
}
}
pub fn get() -> Config {
return CONFIG.read().unwrap().clone();
}
pub fn set(cfg: Config) -> bool {
let mut lock = CONFIG.write().unwrap();
if *lock == cfg {
return false;
}
*lock = cfg;
lock.store();
true
}
}
const PEERS: &str = "peers";
impl PeerConfig {
pub fn load(id: &str) -> PeerConfig {
let _ = CONFIG.read().unwrap(); // for lock
match confy::load_path(&Self::path(id)) {
Ok(config) => config,
Err(err) => {
log::error!("Failed to load config: {}", err);
Default::default()
}
}
}
pub fn store(&self, id: &str) {
let _ = CONFIG.read().unwrap(); // for lock
if let Err(err) = confy::store_path(Self::path(id), self) {
log::error!("Failed to store config: {}", err);
}
}
pub fn remove(id: &str) {
fs::remove_file(&Self::path(id)).ok();
}
fn path(id: &str) -> PathBuf {
let path: PathBuf = [PEERS, id].iter().collect();
Config::path(path).with_extension("toml")
}
pub fn peers() -> Vec<(String, SystemTime, PeerConfig)> {
if let Ok(peers) = Config::path(PEERS).read_dir() {
if let Ok(peers) = peers
.map(|res| res.map(|e| e.path()))
.collect::<Result<Vec<_>, _>>()
{
let mut peers: Vec<_> = peers
.iter()
.filter(|p| {
p.is_file()
&& p.extension().map(|p| p.to_str().unwrap_or("")) == Some("toml")
})
.map(|p| {
let t = crate::get_modified_time(&p);
let id = p
.file_stem()
.map(|p| p.to_str().unwrap_or(""))
.unwrap_or("")
.to_owned();
let c = PeerConfig::load(&id);
if c.info.platform.is_empty() {
fs::remove_file(&p).ok();
}
(id, t, c)
})
.filter(|p| !p.2.info.platform.is_empty())
.collect();
peers.sort_unstable_by(|a, b| b.1.cmp(&a.1));
return peers;
}
}
Default::default()
}
}
#[derive(Debug, Default, Serialize, Deserialize, Clone)]
pub struct LocalConfig {
#[serde(default)]
remote_id: String, // latest used one
#[serde(default)]
size: Size,
#[serde(default)]
pub fav: Vec<String>,
#[serde(default)]
options: HashMap<String, String>,
}
impl LocalConfig {
fn load() -> LocalConfig {
Config::load_::<LocalConfig>("_local")
}
fn store(&self) {
Config::store_(self, "_local");
}
pub fn get_size() -> Size {
LOCAL_CONFIG.read().unwrap().size
}
pub fn set_size(x: i32, y: i32, w: i32, h: i32) {
let mut config = LOCAL_CONFIG.write().unwrap();
let size = (x, y, w, h);
if size == config.size || size.2 < 300 || size.3 < 300 {
return;
}
config.size = size;
config.store();
}
pub fn set_remote_id(remote_id: &str) {
let mut config = LOCAL_CONFIG.write().unwrap();
if remote_id == config.remote_id {
return;
}
config.remote_id = remote_id.into();
config.store();
}
pub fn get_remote_id() -> String {
LOCAL_CONFIG.read().unwrap().remote_id.clone()
}
pub fn set_fav(fav: Vec<String>) {
let mut lock = LOCAL_CONFIG.write().unwrap();
if lock.fav == fav {
return;
}
lock.fav = fav;
lock.store();
}
pub fn get_fav() -> Vec<String> {
LOCAL_CONFIG.read().unwrap().fav.clone()
}
pub fn get_option(k: &str) -> String {
if let Some(v) = LOCAL_CONFIG.read().unwrap().options.get(k) {
v.clone()
} else {
"".to_owned()
}
}
pub fn set_option(k: String, v: String) {
let mut config = LOCAL_CONFIG.write().unwrap();
let v2 = if v.is_empty() { None } else { Some(&v) };
if v2 != config.options.get(&k) {
if v2.is_none() {
config.options.remove(&k);
} else {
config.options.insert(k, v);
}
config.store();
}
}
}
#[derive(Debug, Default, Serialize, Deserialize, Clone)]
pub struct LanPeers {
#[serde(default)]
pub peers: String,
}
impl LanPeers {
pub fn load() -> LanPeers {
let _ = CONFIG.read().unwrap(); // for lock
match confy::load_path(&Config::file_("_lan_peers")) {
Ok(peers) => peers,
Err(err) => {
log::error!("Failed to load lan peers: {}", err);
Default::default()
}
}
}
pub fn store(peers: String) {
let f = LanPeers { peers };
if let Err(err) = confy::store_path(Config::file_("_lan_peers"), f) {
log::error!("Failed to store lan peers: {}", err);
}
}
pub fn modify_time() -> crate::ResultType<u64> {
let p = Config::file_("_lan_peers");
Ok(fs::metadata(p)?
.modified()?
.duration_since(SystemTime::UNIX_EPOCH)?
.as_millis() as _)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_serialize() {
let cfg: Config = Default::default();
let res = toml::to_string_pretty(&cfg);
assert!(res.is_ok());
let cfg: PeerConfig = Default::default();
let res = toml::to_string_pretty(&cfg);
assert!(res.is_ok());
}
}
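A short sketch (not part of the original file) of how the options map above is typically used; the host name is a placeholder, and note that every setter persists the TOML config file via store():
fn rendezvous_option_example() {
    // "custom-rendezvous-server" takes priority over PROD_RENDEZVOUS_SERVER and
    // the built-in RENDEZVOUS_SERVERS list.
    Config::set_option(
        "custom-rendezvous-server".to_owned(),
        "hbbs.example.com".to_owned(), // placeholder host
    );
    // The default port is appended when no ":port" suffix is given.
    assert_eq!(Config::get_rendezvous_server(), "hbbs.example.com:21116");
    // Setting an option to the empty string removes it again.
    Config::set_option("custom-rendezvous-server".to_owned(), String::new());
}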


@@ -1,560 +0,0 @@
use crate::{bail, message_proto::*, ResultType};
use std::path::{Path, PathBuf};
// https://doc.rust-lang.org/std/os/windows/fs/trait.MetadataExt.html
use crate::{
compress::{compress, decompress},
config::{Config, COMPRESS_LEVEL},
};
#[cfg(windows)]
use std::os::windows::prelude::*;
use tokio::{fs::File, io::*};
pub fn read_dir(path: &PathBuf, include_hidden: bool) -> ResultType<FileDirectory> {
let mut dir = FileDirectory {
path: get_string(&path),
..Default::default()
};
#[cfg(windows)]
if "/" == &get_string(&path) {
let drives = unsafe { winapi::um::fileapi::GetLogicalDrives() };
for i in 0..32 {
if drives & (1 << i) != 0 {
let name = format!(
"{}:",
std::char::from_u32('A' as u32 + i as u32).unwrap_or('A')
);
dir.entries.push(FileEntry {
name,
entry_type: FileType::DirDrive.into(),
..Default::default()
});
}
}
return Ok(dir);
}
for entry in path.read_dir()? {
if let Ok(entry) = entry {
let p = entry.path();
let name = p
.file_name()
.map(|p| p.to_str().unwrap_or(""))
.unwrap_or("")
.to_owned();
if name.is_empty() {
continue;
}
let mut is_hidden = false;
let meta;
if let Ok(tmp) = std::fs::symlink_metadata(&p) {
meta = tmp;
} else {
continue;
}
// docs.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants
#[cfg(windows)]
if meta.file_attributes() & 0x2 != 0 {
is_hidden = true;
}
#[cfg(not(windows))]
if name.find('.').unwrap_or(usize::MAX) == 0 {
is_hidden = true;
}
if is_hidden && !include_hidden {
continue;
}
let (entry_type, size) = {
if p.is_dir() {
if meta.file_type().is_symlink() {
(FileType::DirLink.into(), 0)
} else {
(FileType::Dir.into(), 0)
}
} else {
if meta.file_type().is_symlink() {
(FileType::FileLink.into(), 0)
} else {
(FileType::File.into(), meta.len())
}
}
};
let modified_time = meta
.modified()
.map(|x| {
x.duration_since(std::time::SystemTime::UNIX_EPOCH)
.map(|x| x.as_secs())
.unwrap_or(0)
})
.unwrap_or(0) as u64;
dir.entries.push(FileEntry {
name: get_file_name(&p),
entry_type,
is_hidden,
size,
modified_time,
..Default::default()
});
}
}
Ok(dir)
}
#[inline]
pub fn get_file_name(p: &PathBuf) -> String {
p.file_name()
.map(|p| p.to_str().unwrap_or(""))
.unwrap_or("")
.to_owned()
}
#[inline]
pub fn get_string(path: &PathBuf) -> String {
path.to_str().unwrap_or("").to_owned()
}
#[inline]
pub fn get_path(path: &str) -> PathBuf {
Path::new(path).to_path_buf()
}
#[inline]
pub fn get_home_as_string() -> String {
get_string(&Config::get_home())
}
fn read_dir_recursive(
path: &PathBuf,
prefix: &PathBuf,
include_hidden: bool,
) -> ResultType<Vec<FileEntry>> {
let mut files = Vec::new();
if path.is_dir() {
// to-do: symbolic link handling, copy the link itself rather than the content it points to
// to-do: file mode, for unix
let fd = read_dir(&path, include_hidden)?;
for entry in fd.entries.iter() {
match entry.entry_type.enum_value() {
Ok(FileType::File) => {
let mut entry = entry.clone();
entry.name = get_string(&prefix.join(entry.name));
files.push(entry);
}
Ok(FileType::Dir) => {
if let Ok(mut tmp) = read_dir_recursive(
&path.join(&entry.name),
&prefix.join(&entry.name),
include_hidden,
) {
for entry in tmp.drain(0..) {
files.push(entry);
}
}
}
_ => {}
}
}
Ok(files)
} else if path.is_file() {
let (size, modified_time) = if let Ok(meta) = std::fs::metadata(&path) {
(
meta.len(),
meta.modified()
.map(|x| {
x.duration_since(std::time::SystemTime::UNIX_EPOCH)
.map(|x| x.as_secs())
.unwrap_or(0)
})
.unwrap_or(0) as u64,
)
} else {
(0, 0)
};
files.push(FileEntry {
entry_type: FileType::File.into(),
size,
modified_time,
..Default::default()
});
Ok(files)
} else {
bail!("Not exists");
}
}
pub fn get_recursive_files(path: &str, include_hidden: bool) -> ResultType<Vec<FileEntry>> {
read_dir_recursive(&get_path(path), &get_path(""), include_hidden)
}
#[derive(Default)]
pub struct TransferJob {
id: i32,
path: PathBuf,
files: Vec<FileEntry>,
file_num: i32,
file: Option<File>,
total_size: u64,
finished_size: u64,
transferred: u64,
}
#[inline]
fn get_ext(name: &str) -> &str {
if let Some(i) = name.rfind(".") {
return &name[i + 1..];
}
""
}
#[inline]
fn is_compressed_file(name: &str) -> bool {
let ext = get_ext(name);
ext == "xz"
|| ext == "gz"
|| ext == "zip"
|| ext == "7z"
|| ext == "rar"
|| ext == "bz2"
|| ext == "tgz"
|| ext == "png"
|| ext == "jpg"
}
impl TransferJob {
pub fn new_write(id: i32, path: String, files: Vec<FileEntry>) -> Self {
let total_size = files.iter().map(|x| x.size as u64).sum();
Self {
id,
path: get_path(&path),
files,
total_size,
..Default::default()
}
}
pub fn new_read(id: i32, path: String, include_hidden: bool) -> ResultType<Self> {
let files = get_recursive_files(&path, include_hidden)?;
let total_size = files.iter().map(|x| x.size as u64).sum();
Ok(Self {
id,
path: get_path(&path),
files,
total_size,
..Default::default()
})
}
#[inline]
pub fn files(&self) -> &Vec<FileEntry> {
&self.files
}
#[inline]
pub fn set_files(&mut self, files: Vec<FileEntry>) {
self.files = files;
}
#[inline]
pub fn id(&self) -> i32 {
self.id
}
#[inline]
pub fn total_size(&self) -> u64 {
self.total_size
}
#[inline]
pub fn finished_size(&self) -> u64 {
self.finished_size
}
#[inline]
pub fn transferred(&self) -> u64 {
self.transferred
}
#[inline]
pub fn file_num(&self) -> i32 {
self.file_num
}
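// When the current file is complete, rename its temporary "<name>.download"
// file to the final name and restore the sender's modification time.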
pub fn modify_time(&self) {
let file_num = self.file_num as usize;
if file_num < self.files.len() {
let entry = &self.files[file_num];
let path = self.join(&entry.name);
let download_path = format!("{}.download", get_string(&path));
std::fs::rename(&download_path, &path).ok();
filetime::set_file_mtime(
&path,
filetime::FileTime::from_unix_time(entry.modified_time as _, 0),
)
.ok();
}
}
pub fn remove_download_file(&self) {
let file_num = self.file_num as usize;
if file_num < self.files.len() {
let entry = &self.files[file_num];
let path = self.join(&entry.name);
let download_path = format!("{}.download", get_string(&path));
std::fs::remove_file(&download_path).ok();
}
}
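// Append one incoming block to the job. A change of file_num finalizes the
// previous file (rename + mtime) and opens "<name>.download" for the next one;
// compressed blocks are decompressed before being written.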
pub async fn write(&mut self, block: FileTransferBlock, raw: Option<&[u8]>) -> ResultType<()> {
if block.id != self.id {
bail!("Wrong id");
}
let file_num = block.file_num as usize;
if file_num >= self.files.len() {
bail!("Wrong file number");
}
if file_num != self.file_num as usize || self.file.is_none() {
self.modify_time();
if let Some(file) = self.file.as_mut() {
file.sync_all().await?;
}
self.file_num = block.file_num;
let entry = &self.files[file_num];
let path = self.join(&entry.name);
if let Some(p) = path.parent() {
std::fs::create_dir_all(p).ok();
}
let path = format!("{}.download", get_string(&path));
self.file = Some(File::create(&path).await?);
}
let data = if let Some(data) = raw {
data
} else {
&block.data
};
if block.compressed {
let tmp = decompress(data);
self.file.as_mut().unwrap().write_all(&tmp).await?;
self.finished_size += tmp.len() as u64;
} else {
self.file.as_mut().unwrap().write_all(data).await?;
self.finished_size += data.len() as u64;
}
self.transferred += data.len() as u64;
Ok(())
}
#[inline]
fn join(&self, name: &str) -> PathBuf {
if name.is_empty() {
self.path.clone()
} else {
self.path.join(name)
}
}
pub async fn read(&mut self) -> ResultType<Option<FileTransferBlock>> {
let file_num = self.file_num as usize;
if file_num >= self.files.len() {
self.file.take();
return Ok(None);
}
let name = &self.files[file_num].name;
if self.file.is_none() {
match File::open(self.join(&name)).await {
Ok(file) => {
self.file = Some(file);
}
Err(err) => {
self.file_num += 1;
return Err(err.into());
}
}
}
const BUF_SIZE: usize = 128 * 1024;
let mut buf: Vec<u8> = Vec::with_capacity(BUF_SIZE);
unsafe {
buf.set_len(BUF_SIZE);
}
let mut compressed = false;
let mut offset: usize = 0;
loop {
match self.file.as_mut().unwrap().read(&mut buf[offset..]).await {
Err(err) => {
self.file_num += 1;
self.file = None;
return Err(err.into());
}
Ok(n) => {
offset += n;
if n == 0 || offset == BUF_SIZE {
break;
}
}
}
}
unsafe { buf.set_len(offset) };
if offset == 0 {
self.file_num += 1;
self.file = None;
} else {
self.finished_size += offset as u64;
if !is_compressed_file(name) {
let tmp = compress(&buf, COMPRESS_LEVEL);
if tmp.len() < buf.len() {
buf = tmp;
compressed = true;
}
}
self.transferred += buf.len() as u64;
}
Ok(Some(FileTransferBlock {
id: self.id,
file_num: file_num as _,
data: buf.into(),
compressed,
..Default::default()
}))
}
}
#[inline]
pub fn new_error<T: std::string::ToString>(id: i32, err: T, file_num: i32) -> Message {
let mut resp = FileResponse::new();
resp.set_error(FileTransferError {
id,
error: err.to_string(),
file_num,
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_response(resp);
msg_out
}
#[inline]
pub fn new_dir(id: i32, path: String, files: Vec<FileEntry>) -> Message {
let mut resp = FileResponse::new();
resp.set_dir(FileDirectory {
id,
path,
entries: files.into(),
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_response(resp);
msg_out
}
#[inline]
pub fn new_block(block: FileTransferBlock) -> Message {
let mut resp = FileResponse::new();
resp.set_block(block);
let mut msg_out = Message::new();
msg_out.set_file_response(resp);
msg_out
}
#[inline]
pub fn new_receive(id: i32, path: String, files: Vec<FileEntry>) -> Message {
let mut action = FileAction::new();
action.set_receive(FileTransferReceiveRequest {
id,
path,
files: files.into(),
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_action(action);
msg_out
}
#[inline]
pub fn new_send(id: i32, path: String, include_hidden: bool) -> Message {
let mut action = FileAction::new();
action.set_send(FileTransferSendRequest {
id,
path,
include_hidden,
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_action(action);
msg_out
}
#[inline]
pub fn new_done(id: i32, file_num: i32) -> Message {
let mut resp = FileResponse::new();
resp.set_done(FileTransferDone {
id,
file_num,
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_response(resp);
msg_out
}
#[inline]
pub fn remove_job(id: i32, jobs: &mut Vec<TransferJob>) {
*jobs = jobs.drain(0..).filter(|x| x.id() != id).collect();
}
#[inline]
pub fn get_job(id: i32, jobs: &mut Vec<TransferJob>) -> Option<&mut TransferJob> {
jobs.iter_mut().filter(|x| x.id() == id).next()
}
pub async fn handle_read_jobs(
jobs: &mut Vec<TransferJob>,
stream: &mut crate::Stream,
) -> ResultType<()> {
let mut finished = Vec::new();
for job in jobs.iter_mut() {
match job.read().await {
Err(err) => {
stream
.send(&new_error(job.id(), err, job.file_num()))
.await?;
}
Ok(Some(block)) => {
stream.send(&new_block(block)).await?;
}
Ok(None) => {
finished.push(job.id());
stream.send(&new_done(job.id(), job.file_num())).await?;
}
}
}
for id in finished {
remove_job(id, jobs);
}
Ok(())
}
pub fn remove_all_empty_dir(path: &PathBuf) -> ResultType<()> {
let fd = read_dir(path, true)?;
for entry in fd.entries.iter() {
match entry.entry_type.enum_value() {
Ok(FileType::Dir) => {
remove_all_empty_dir(&path.join(&entry.name)).ok();
}
Ok(FileType::DirLink) | Ok(FileType::FileLink) => {
std::fs::remove_file(&path.join(&entry.name)).ok();
}
_ => {}
}
}
std::fs::remove_dir(path).ok();
Ok(())
}
#[inline]
pub fn remove_file(file: &str) -> ResultType<()> {
std::fs::remove_file(get_path(file))?;
Ok(())
}
#[inline]
pub fn create_dir(dir: &str) -> ResultType<()> {
std::fs::create_dir_all(get_path(dir))?;
Ok(())
}
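A minimal sender-side sketch (no networking, not part of the original file), assuming the items above are in scope; the path is a placeholder. It builds a read job over a directory and drains blocks until read() signals completion with Ok(None):
async fn drain_job_example() -> ResultType<u64> {
    let mut job = TransferJob::new_read(1, "/tmp/demo".to_owned(), false)?;
    let mut transferred = 0u64;
    while let Some(block) = job.read().await? {
        // Normally each block would be wrapped with new_block() and sent on the stream.
        transferred += block.data.len() as u64;
    }
    Ok(transferred)
}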


@@ -1,211 +0,0 @@
pub mod compress;
#[path = "./protos/message.rs"]
pub mod message_proto;
#[path = "./protos/rendezvous.rs"]
pub mod rendezvous_proto;
pub use bytes;
pub use futures;
pub use protobuf;
use std::{
fs::File,
io::{self, BufRead},
net::{Ipv4Addr, SocketAddr, SocketAddrV4},
path::Path,
time::{self, SystemTime, UNIX_EPOCH},
};
pub use tokio;
pub use tokio_util;
pub mod socket_client;
pub mod tcp;
pub mod udp;
pub use env_logger;
pub use log;
pub mod bytes_codec;
#[cfg(feature = "quic")]
pub mod quic;
pub use anyhow::{self, bail};
pub use futures_util;
pub mod config;
pub mod fs;
#[cfg(not(any(target_os = "android", target_os = "ios")))]
pub use mac_address;
pub use rand;
pub use regex;
pub use sodiumoxide;
pub use tokio_socks;
pub use tokio_socks::IntoTargetAddr;
pub use tokio_socks::TargetAddr;
pub use lazy_static;
#[cfg(feature = "quic")]
pub type Stream = quic::Connection;
#[cfg(not(feature = "quic"))]
pub type Stream = tcp::FramedStream;
#[inline]
pub async fn sleep(sec: f32) {
tokio::time::sleep(time::Duration::from_secs_f32(sec)).await;
}
#[macro_export]
macro_rules! allow_err {
($e:expr) => {
if let Err(err) = $e {
log::debug!(
"{:?}, {}:{}:{}:{}",
err,
module_path!(),
file!(),
line!(),
column!()
);
} else {
}
};
}
#[inline]
pub fn timeout<T: std::future::Future>(ms: u64, future: T) -> tokio::time::Timeout<T> {
tokio::time::timeout(std::time::Duration::from_millis(ms), future)
}
pub type ResultType<F, E = anyhow::Error> = anyhow::Result<F, E>;
/// Certain routers and firewalls scan packets and rewrite any IP address that
/// belongs to the pool they use for NAT mapping/translation, so we mangle the
/// address here to hide it from such middleboxes.
pub struct AddrMangle();
impl AddrMangle {
pub fn encode(addr: SocketAddr) -> Vec<u8> {
match addr {
SocketAddr::V4(addr_v4) => {
let tm = (SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_micros() as u32) as u128;
let ip = u32::from_le_bytes(addr_v4.ip().octets()) as u128;
let port = addr.port() as u128;
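// Pack everything into one u128: bits 49.. hold ip + tm, bits 17..49 hold tm,
// and the low 17 bits hold port + (tm & 0xFFFF). Trailing zero bytes are
// stripped below so the wire form stays short.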
let v = ((ip + tm) << 49) | (tm << 17) | (port + (tm & 0xFFFF));
let bytes = v.to_le_bytes();
let mut n_padding = 0;
for i in bytes.iter().rev() {
if i == &0u8 {
n_padding += 1;
} else {
break;
}
}
bytes[..(16 - n_padding)].to_vec()
}
_ => {
panic!("Only support ipv4");
}
}
}
pub fn decode(bytes: &[u8]) -> SocketAddr {
let mut padded = [0u8; 16];
padded[..bytes.len()].copy_from_slice(&bytes);
let number = u128::from_le_bytes(padded);
let tm = (number >> 17) & (u32::max_value() as u128);
let ip = (((number >> 49) - tm) as u32).to_le_bytes();
let port = (number & 0xFFFFFF) - (tm & 0xFFFF);
SocketAddr::V4(SocketAddrV4::new(
Ipv4Addr::new(ip[0], ip[1], ip[2], ip[3]),
port as u16,
))
}
}
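// Extract the version fragment of a download URL: the text after the last '-',
// dropping a trailing file extension unless that suffix is purely numeric
// (e.g. ".../rustdesk-1.1.9.dmg" -> "1.1.9").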
pub fn get_version_from_url(url: &str) -> String {
let n = url.chars().count();
let a = url
.chars()
.rev()
.enumerate()
.filter(|(_, x)| x == &'-')
.next()
.map(|(i, _)| i);
if let Some(a) = a {
let b = url
.chars()
.rev()
.enumerate()
.filter(|(_, x)| x == &'.')
.next()
.map(|(i, _)| i);
if let Some(b) = b {
if a > b {
if url
.chars()
.skip(n - b)
.collect::<String>()
.parse::<i32>()
.is_ok()
{
return url.chars().skip(n - a).collect();
} else {
return url.chars().skip(n - a).take(a - b - 1).collect();
}
} else {
return url.chars().skip(n - a).collect();
}
}
}
"".to_owned()
}
pub fn gen_version() {
let mut file = File::create("./src/version.rs").unwrap();
for line in read_lines("Cargo.toml").unwrap() {
if let Ok(line) = line {
let ab: Vec<&str> = line.split("=").map(|x| x.trim()).collect();
if ab.len() == 2 && ab[0] == "version" {
use std::io::prelude::*;
file.write_all(format!("pub const VERSION: &str = {};", ab[1]).as_bytes())
.ok();
file.sync_all().ok();
break;
}
}
}
}
fn read_lines<P>(filename: P) -> io::Result<io::Lines<io::BufReader<File>>>
where
P: AsRef<Path>,
{
let file = File::open(filename)?;
Ok(io::BufReader::new(file).lines())
}
pub fn is_valid_custom_id(id: &str) -> bool {
regex::Regex::new(r"^[a-zA-Z]\w{5,15}$")
.unwrap()
.is_match(id)
}
pub fn get_version_number(v: &str) -> i64 {
let mut n = 0;
for x in v.split(".") {
n = n * 1000 + x.parse::<i64>().unwrap_or(0);
}
n
}
pub fn get_modified_time(path: &std::path::Path) -> SystemTime {
std::fs::metadata(&path)
.map(|m| m.modified().unwrap_or(UNIX_EPOCH))
.unwrap_or(UNIX_EPOCH)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_mangle() {
let addr = SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(192, 168, 16, 32), 21116));
assert_eq!(addr, AddrMangle::decode(&AddrMangle::encode(addr)));
}
}


@@ -1,135 +0,0 @@
use crate::{allow_err, anyhow::anyhow, ResultType};
use protobuf::Message;
use std::{net::SocketAddr, sync::Arc};
use tokio::{self, stream::StreamExt, sync::mpsc};
const QUIC_HBB: &[&[u8]] = &[b"hbb"];
const SERVER_NAME: &str = "hbb";
type Sender = mpsc::UnboundedSender<Value>;
type Receiver = mpsc::UnboundedReceiver<Value>;
pub fn new_server(socket: std::net::UdpSocket) -> ResultType<(Server, SocketAddr)> {
let mut transport_config = quinn::TransportConfig::default();
transport_config.stream_window_uni(0);
let mut server_config = quinn::ServerConfig::default();
server_config.transport = Arc::new(transport_config);
let mut server_config = quinn::ServerConfigBuilder::new(server_config);
server_config.protocols(QUIC_HBB);
// server_config.enable_keylog();
// server_config.use_stateless_retry(true);
let mut endpoint = quinn::Endpoint::builder();
endpoint.listen(server_config.build());
let (end, incoming) = endpoint.with_socket(socket)?;
Ok((Server { incoming }, end.local_addr()?))
}
pub async fn new_client(local_addr: &SocketAddr, peer: &SocketAddr) -> ResultType<Connection> {
let mut endpoint = quinn::Endpoint::builder();
let mut client_config = quinn::ClientConfigBuilder::default();
client_config.protocols(QUIC_HBB);
//client_config.enable_keylog();
endpoint.default_client_config(client_config.build());
let (endpoint, _) = endpoint.bind(local_addr)?;
let new_conn = endpoint.connect(peer, SERVER_NAME)?.await?;
Connection::new_for_client(new_conn.connection).await
}
pub struct Server {
incoming: quinn::Incoming,
}
impl Server {
#[inline]
pub async fn next(&mut self) -> ResultType<Option<Connection>> {
Connection::new_for_server(&mut self.incoming).await
}
}
pub struct Connection {
conn: quinn::Connection,
tx: quinn::SendStream,
rx: Receiver,
}
type Value = ResultType<Vec<u8>>;
impl Connection {
async fn new_for_server(incoming: &mut quinn::Incoming) -> ResultType<Option<Self>> {
if let Some(conn) = incoming.next().await {
let quinn::NewConnection {
connection: conn,
// uni_streams,
mut bi_streams,
..
} = conn.await?;
let (tx, rx) = mpsc::unbounded_channel::<Value>();
tokio::spawn(async move {
loop {
let stream = bi_streams.next().await;
if let Some(stream) = stream {
let stream = match stream {
Err(e) => {
tx.send(Err(e.into())).ok();
break;
}
Ok(s) => s,
};
let cloned = tx.clone();
tokio::spawn(async move {
allow_err!(handle_request(stream.1, cloned).await);
});
} else {
tx.send(Err(anyhow!("Reset by the peer"))).ok();
break;
}
}
log::info!("Exit connection outer loop");
});
let tx = conn.open_uni().await?;
Ok(Some(Self { conn, tx, rx }))
} else {
Ok(None)
}
}
async fn new_for_client(conn: quinn::Connection) -> ResultType<Self> {
let (tx, rx_quic) = conn.open_bi().await?;
let (tx_mpsc, rx) = mpsc::unbounded_channel::<Value>();
tokio::spawn(async move {
allow_err!(handle_request(rx_quic, tx_mpsc).await);
});
Ok(Self { conn, tx, rx })
}
#[inline]
pub async fn next(&mut self) -> Option<Value> {
// None is returned when all Sender halves have dropped,
// indicating that no further values can be sent on the channel.
self.rx.recv().await
}
#[inline]
pub fn remote_address(&self) -> SocketAddr {
self.conn.remote_address()
}
#[inline]
pub async fn send_raw(&mut self, bytes: &[u8]) -> ResultType<()> {
self.tx.write_all(bytes).await?;
Ok(())
}
#[inline]
pub async fn send(&mut self, msg: &dyn Message) -> ResultType<()> {
match msg.write_to_bytes() {
Ok(bytes) => self.send_raw(&bytes).await?,
err => allow_err!(err),
}
Ok(())
}
}
async fn handle_request(rx: quinn::RecvStream, tx: Sender) -> ResultType<()> {
Ok(())
}


@@ -1,91 +0,0 @@
use crate::{
config::{Config, NetworkType},
tcp::FramedStream,
udp::FramedSocket,
ResultType,
};
use anyhow::Context;
use std::net::SocketAddr;
use tokio::net::ToSocketAddrs;
use tokio_socks::{IntoTargetAddr, TargetAddr};
fn to_socket_addr(host: &str) -> ResultType<SocketAddr> {
use std::net::ToSocketAddrs;
host.to_socket_addrs()?.next().context("Failed to solve")
}
pub fn get_target_addr(host: &str) -> ResultType<TargetAddr<'static>> {
let addr = match Config::get_network_type() {
NetworkType::Direct => to_socket_addr(&host)?.into_target_addr()?,
NetworkType::ProxySocks => host.into_target_addr()?,
}
.to_owned();
Ok(addr)
}
pub fn test_if_valid_server(host: &str) -> String {
let mut host = host.to_owned();
if !host.contains(":") {
host = format!("{}:{}", host, 0);
}
match Config::get_network_type() {
NetworkType::Direct => match to_socket_addr(&host) {
Err(err) => err.to_string(),
Ok(_) => "".to_owned(),
},
NetworkType::ProxySocks => match &host.into_target_addr() {
Err(err) => err.to_string(),
Ok(_) => "".to_owned(),
},
}
}
pub async fn connect_tcp<'t, T: IntoTargetAddr<'t>>(
target: T,
local: SocketAddr,
ms_timeout: u64,
) -> ResultType<FramedStream> {
let target_addr = target.into_target_addr()?;
if let Some(conf) = Config::get_socks() {
FramedStream::connect(
conf.proxy.as_str(),
target_addr,
local,
conf.username.as_str(),
conf.password.as_str(),
ms_timeout,
)
.await
} else {
let addr = std::net::ToSocketAddrs::to_socket_addrs(&target_addr)?
.next()
.context("Invalid target addr")?;
Ok(FramedStream::new(addr, local, ms_timeout).await?)
}
}
pub async fn new_udp<T: ToSocketAddrs>(local: T, ms_timeout: u64) -> ResultType<FramedSocket> {
match Config::get_socks() {
None => Ok(FramedSocket::new(local).await?),
Some(conf) => {
let socket = FramedSocket::new_proxy(
conf.proxy.as_str(),
local,
conf.username.as_str(),
conf.password.as_str(),
ms_timeout,
)
.await?;
Ok(socket)
}
}
}
pub async fn rebind_udp<T: ToSocketAddrs>(local: T) -> ResultType<Option<FramedSocket>> {
match Config::get_network_type() {
NetworkType::Direct => Ok(Some(FramedSocket::new(local).await?)),
_ => Ok(None),
}
}
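A minimal sketch of the connect path above (not part of the original file), assuming connect_tcp, Config and config::CONNECT_TIMEOUT are in scope; the server address is a placeholder:
async fn connect_example() -> ResultType<()> {
    // connect_tcp transparently goes through the configured SOCKS5 proxy, if any.
    let mut stream = connect_tcp(
        "hbbs.example.com:21116", // placeholder rendezvous/relay address
        Config::get_any_listen_addr(),
        CONNECT_TIMEOUT,
    )
    .await?;
    stream.send_raw(b"ping".to_vec()).await?;
    Ok(())
}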

View File

@@ -1,285 +0,0 @@
use crate::{bail, bytes_codec::BytesCodec, ResultType};
use bytes::{BufMut, Bytes, BytesMut};
use futures::{SinkExt, StreamExt};
use protobuf::Message;
use sodiumoxide::crypto::secretbox::{self, Key, Nonce};
use std::{
io::{self, Error, ErrorKind},
net::SocketAddr,
ops::{Deref, DerefMut},
pin::Pin,
task::{Context, Poll},
};
use tokio::{
io::{AsyncRead, AsyncWrite, ReadBuf},
net::{lookup_host, TcpListener, TcpSocket, ToSocketAddrs},
};
use tokio_socks::{tcp::Socks5Stream, IntoTargetAddr, ToProxyAddrs};
use tokio_util::codec::Framed;
pub trait TcpStreamTrait: AsyncRead + AsyncWrite + Unpin {}
pub struct DynTcpStream(Box<dyn TcpStreamTrait + Send + Sync>);
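// FramedStream fields: the framed transport, the local address, an optional
// (secretbox key, send seq, recv seq) triple once the channel is secured, and
// a send timeout in milliseconds (0 = no timeout).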
pub struct FramedStream(
Framed<DynTcpStream, BytesCodec>,
SocketAddr,
Option<(Key, u64, u64)>,
u64,
);
impl Deref for FramedStream {
type Target = Framed<DynTcpStream, BytesCodec>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for FramedStream {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl Deref for DynTcpStream {
type Target = Box<dyn TcpStreamTrait + Send + Sync>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for DynTcpStream {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
fn new_socket(addr: std::net::SocketAddr, reuse: bool) -> Result<TcpSocket, std::io::Error> {
let socket = match addr {
std::net::SocketAddr::V4(..) => TcpSocket::new_v4()?,
std::net::SocketAddr::V6(..) => TcpSocket::new_v6()?,
};
if reuse {
// Windows has no reuse_port, but its reuse_address is almost equivalent to
// Unix's reuse_port + reuse_address, though it may introduce
// nondeterministic behavior.
#[cfg(unix)]
socket.set_reuseport(true)?;
socket.set_reuseaddr(true)?;
}
socket.bind(addr)?;
Ok(socket)
}
impl FramedStream {
pub async fn new<T1: ToSocketAddrs, T2: ToSocketAddrs>(
remote_addr: T1,
local_addr: T2,
ms_timeout: u64,
) -> ResultType<Self> {
for local_addr in lookup_host(&local_addr).await? {
for remote_addr in lookup_host(&remote_addr).await? {
let stream = super::timeout(
ms_timeout,
new_socket(local_addr, true)?.connect(remote_addr),
)
.await??;
stream.set_nodelay(true).ok();
let addr = stream.local_addr()?;
return Ok(Self(
Framed::new(DynTcpStream(Box::new(stream)), BytesCodec::new()),
addr,
None,
0,
));
}
}
bail!("could not resolve to any address");
}
pub async fn connect<'a, 't, P, T1, T2>(
proxy: P,
target: T1,
local: T2,
username: &'a str,
password: &'a str,
ms_timeout: u64,
) -> ResultType<Self>
where
P: ToProxyAddrs,
T1: IntoTargetAddr<'t>,
T2: ToSocketAddrs,
{
if let Some(local) = lookup_host(&local).await?.next() {
if let Some(proxy) = proxy.to_proxy_addrs().next().await {
let stream =
super::timeout(ms_timeout, new_socket(local, true)?.connect(proxy?)).await??;
stream.set_nodelay(true).ok();
let stream = if username.trim().is_empty() {
super::timeout(
ms_timeout,
Socks5Stream::connect_with_socket(stream, target),
)
.await??
} else {
super::timeout(
ms_timeout,
Socks5Stream::connect_with_password_and_socket(
stream, target, username, password,
),
)
.await??
};
let addr = stream.local_addr()?;
return Ok(Self(
Framed::new(DynTcpStream(Box::new(stream)), BytesCodec::new()),
addr,
None,
0,
));
};
};
bail!("could not resolve to any address");
}
pub fn local_addr(&self) -> SocketAddr {
self.1
}
pub fn set_send_timeout(&mut self, ms: u64) {
self.3 = ms;
}
pub fn from(stream: impl TcpStreamTrait + Send + Sync + 'static, addr: SocketAddr) -> Self {
Self(
Framed::new(DynTcpStream(Box::new(stream)), BytesCodec::new()),
addr,
None,
0,
)
}
pub fn set_raw(&mut self) {
self.0.codec_mut().set_raw();
self.2 = None;
}
pub fn is_secured(&self) -> bool {
self.2.is_some()
}
#[inline]
pub async fn send(&mut self, msg: &impl Message) -> ResultType<()> {
self.send_raw(msg.write_to_bytes()?).await
}
#[inline]
pub async fn send_raw(&mut self, msg: Vec<u8>) -> ResultType<()> {
let mut msg = msg;
if let Some(key) = self.2.as_mut() {
key.1 += 1;
let nonce = Self::get_nonce(key.1);
msg = secretbox::seal(&msg, &nonce, &key.0);
}
self.send_bytes(bytes::Bytes::from(msg)).await?;
Ok(())
}
#[inline]
pub async fn send_bytes(&mut self, bytes: Bytes) -> ResultType<()> {
if self.3 > 0 {
super::timeout(self.3, self.0.send(bytes)).await??;
} else {
self.0.send(bytes).await?;
}
Ok(())
}
#[inline]
pub async fn next(&mut self) -> Option<Result<BytesMut, Error>> {
let mut res = self.0.next().await;
if let Some(key) = self.2.as_mut() {
if let Some(Ok(bytes)) = res.as_mut() {
key.2 += 1;
let nonce = Self::get_nonce(key.2);
match secretbox::open(&bytes, &nonce, &key.0) {
Ok(res) => {
bytes.clear();
bytes.put_slice(&res);
}
Err(()) => {
return Some(Err(Error::new(ErrorKind::Other, "decryption error")));
}
}
}
}
res
}
#[inline]
pub async fn next_timeout(&mut self, ms: u64) -> Option<Result<BytesMut, Error>> {
if let Ok(res) = super::timeout(ms, self.next()).await {
res
} else {
None
}
}
pub fn set_key(&mut self, key: Key) {
self.2 = Some((key, 0, 0));
}
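// Derive a unique nonce from the per-direction sequence counter, so the same
// (key, nonce) pair is never used for two messages.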
fn get_nonce(seqnum: u64) -> Nonce {
let mut nonce = Nonce([0u8; secretbox::NONCEBYTES]);
nonce.0[..std::mem::size_of_val(&seqnum)].copy_from_slice(&seqnum.to_le_bytes());
nonce
}
}
const DEFAULT_BACKLOG: u32 = 128;
#[allow(clippy::never_loop)]
pub async fn new_listener<T: ToSocketAddrs>(addr: T, reuse: bool) -> ResultType<TcpListener> {
if !reuse {
Ok(TcpListener::bind(addr).await?)
} else {
for addr in lookup_host(&addr).await? {
let socket = new_socket(addr, true)?;
return Ok(socket.listen(DEFAULT_BACKLOG)?);
}
bail!("could not resolve to any address");
}
}
impl Unpin for DynTcpStream {}
impl AsyncRead for DynTcpStream {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut ReadBuf<'_>,
) -> Poll<io::Result<()>> {
AsyncRead::poll_read(Pin::new(&mut self.0), cx, buf)
}
}
impl AsyncWrite for DynTcpStream {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<io::Result<usize>> {
AsyncWrite::poll_write(Pin::new(&mut self.0), cx, buf)
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
AsyncWrite::poll_flush(Pin::new(&mut self.0), cx)
}
fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
AsyncWrite::poll_shutdown(Pin::new(&mut self.0), cx)
}
}
impl<R: AsyncRead + AsyncWrite + Unpin> TcpStreamTrait for R {}


@@ -1,165 +0,0 @@
use crate::{bail, ResultType};
use anyhow::anyhow;
use bytes::{Bytes, BytesMut};
use futures::{SinkExt, StreamExt};
use protobuf::Message;
use socket2::{Domain, Socket, Type};
use std::net::SocketAddr;
use tokio::net::{ToSocketAddrs, UdpSocket};
use tokio_socks::{udp::Socks5UdpFramed, IntoTargetAddr, TargetAddr, ToProxyAddrs};
use tokio_util::{codec::BytesCodec, udp::UdpFramed};
pub enum FramedSocket {
Direct(UdpFramed<BytesCodec>),
ProxySocks(Socks5UdpFramed),
}
fn new_socket(addr: SocketAddr, reuse: bool, buf_size: usize) -> Result<Socket, std::io::Error> {
let socket = match addr {
SocketAddr::V4(..) => Socket::new(Domain::ipv4(), Type::dgram(), None),
SocketAddr::V6(..) => Socket::new(Domain::ipv6(), Type::dgram(), None),
}?;
if reuse {
// Windows has no reuse_port, but its reuse_address is almost equivalent to
// Unix's reuse_port + reuse_address, though it may introduce
// nondeterministic behavior.
#[cfg(unix)]
socket.set_reuse_port(true)?;
socket.set_reuse_address(true)?;
}
if buf_size > 0 {
socket.set_recv_buffer_size(buf_size).ok();
}
log::info!(
"Receive buf size of udp {}: {:?}",
addr,
socket.recv_buffer_size()
);
socket.bind(&addr.into())?;
Ok(socket)
}
impl FramedSocket {
pub async fn new<T: ToSocketAddrs>(addr: T) -> ResultType<Self> {
let socket = UdpSocket::bind(addr).await?;
Ok(Self::Direct(UdpFramed::new(socket, BytesCodec::new())))
}
#[allow(clippy::never_loop)]
pub async fn new_reuse<T: std::net::ToSocketAddrs>(addr: T) -> ResultType<Self> {
for addr in addr.to_socket_addrs()? {
let socket = new_socket(addr, true, 0)?.into_udp_socket();
return Ok(Self::Direct(UdpFramed::new(
UdpSocket::from_std(socket)?,
BytesCodec::new(),
)));
}
bail!("could not resolve to any address");
}
pub async fn new_with_buf_size<T: std::net::ToSocketAddrs>(
addr: T,
buf_size: usize,
) -> ResultType<Self> {
for addr in addr.to_socket_addrs()? {
return Ok(Self::Direct(UdpFramed::new(
UdpSocket::from_std(new_socket(addr, false, buf_size)?.into_udp_socket())?,
BytesCodec::new(),
)));
}
bail!("could not resolve to any address");
}
pub async fn new_proxy<'a, 't, P: ToProxyAddrs, T: ToSocketAddrs>(
proxy: P,
local: T,
username: &'a str,
password: &'a str,
ms_timeout: u64,
) -> ResultType<Self> {
let framed = if username.trim().is_empty() {
super::timeout(ms_timeout, Socks5UdpFramed::connect(proxy, Some(local))).await??
} else {
super::timeout(
ms_timeout,
Socks5UdpFramed::connect_with_password(proxy, Some(local), username, password),
)
.await??
};
log::trace!(
"Socks5 udp connected, local addr: {:?}, target addr: {}",
framed.local_addr(),
framed.socks_addr()
);
Ok(Self::ProxySocks(framed))
}
#[inline]
pub async fn send(
&mut self,
msg: &impl Message,
addr: impl IntoTargetAddr<'_>,
) -> ResultType<()> {
let addr = addr.into_target_addr()?.to_owned();
let send_data = Bytes::from(msg.write_to_bytes()?);
let _ = match self {
Self::Direct(f) => match addr {
TargetAddr::Ip(addr) => f.send((send_data, addr)).await?,
_ => {}
},
Self::ProxySocks(f) => f.send((send_data, addr)).await?,
};
Ok(())
}
// https://stackoverflow.com/a/68733302/1926020
#[inline]
pub async fn send_raw(
&mut self,
msg: &'static [u8],
addr: impl IntoTargetAddr<'static>,
) -> ResultType<()> {
let addr = addr.into_target_addr()?.to_owned();
let _ = match self {
Self::Direct(f) => match addr {
TargetAddr::Ip(addr) => f.send((Bytes::from(msg), addr)).await?,
_ => {}
},
Self::ProxySocks(f) => f.send((Bytes::from(msg), addr)).await?,
};
Ok(())
}
#[inline]
pub async fn next(&mut self) -> Option<ResultType<(BytesMut, TargetAddr<'static>)>> {
match self {
Self::Direct(f) => match f.next().await {
Some(Ok((data, addr))) => {
Some(Ok((data, addr.into_target_addr().ok()?.to_owned())))
}
Some(Err(e)) => Some(Err(anyhow!(e))),
None => None,
},
Self::ProxySocks(f) => match f.next().await {
Some(Ok((data, _))) => Some(Ok((data.data, data.dst_addr))),
Some(Err(e)) => Some(Err(anyhow!(e))),
None => None,
},
}
}
#[inline]
pub async fn next_timeout(
&mut self,
ms: u64,
) -> Option<ResultType<(BytesMut, TargetAddr<'static>)>> {
if let Ok(res) =
tokio::time::timeout(std::time::Duration::from_millis(ms), self.next()).await
{
res
} else {
None
}
}
}

rcd/rustdesk-hbbr Normal file

@@ -0,0 +1,65 @@
#!/bin/sh
# PROVIDE: rustdesk_hbbr
# REQUIRE: LOGIN
# KEYWORD: shutdown
#
# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
# to enable this service:
#
# rustdesk_hbbr_enable (bool): Set to NO by default.
# Set it to YES to enable rustdesk_hbbr.
# rustdesk_hbbr_args (string): Set extra arguments to pass to rustdesk_hbbr
# Default is "-k _".
# rustdesk_hbbr_user (string): Set user that rustdesk_hbbr will run under
# Default is "root".
# rustdesk_hbbr_group (string): Set group that rustdesk_hbbr will run under
# Default is "wheel".
. /etc/rc.subr
name=rustdesk_hbbr
desc="Rustdesk Relay Server"
rcvar=rustdesk_hbbr_enable
load_rc_config $name
: ${rustdesk_hbbr_enable:=NO}
: ${rustdesk_hbbr_args="-k _"}
: ${rustdesk_hbbr_user:=rustdesk}
: ${rustdesk_hbbr_group:=rustdesk}
pidfile=/var/run/rustdesk_hbbr.pid
command=/usr/sbin/daemon
procname=/usr/local/sbin/hbbr
rustdesk_hbbr_chdir=/var/db/rustdesk-server
command_args="-p ${pidfile} -o /var/log/rustdesk-hbbr.log ${procname} ${rustdesk_hbbr_args}"
## If you want the daemon to log via syslog, comment out the above line and uncomment the replacement below
#command_args="-p ${pidfile} -T ${name} ${procname} ${rustdesk_hbbr_args}"
start_precmd=rustdesk_hbbr_startprecmd
rustdesk_hbbr_startprecmd()
{
if [ -e ${pidfile} ]; then
chown ${rustdesk_hbbr_user}:${rustdesk_hbbr_group} ${pidfile};
else
install -o ${rustdesk_hbbr_user} -g ${rustdesk_hbbr_group} /dev/null ${pidfile};
fi
if [ -e ${rustdesk_hbbr_chdir} ]; then
chown -R ${rustdesk_hbbr_user}:${rustdesk_hbbr_group} ${rustdesk_hbbr_chdir};
chmod -R 770 ${rustdesk_hbbr_chdir};
else
mkdir -m 770 ${rustdesk_hbbr_chdir};
chown ${rustdesk_hbbr_user}:${rustdesk_hbbr_group} ${rustdesk_hbbr_chdir};
fi
if [ -e /var/log/rustdesk-hbbr.log ]; then
chown -R ${rustdesk_hbbr_user}:${rustdesk_hbbr_group} /var/log/rustdesk-hbbr.log;
chmod 660 /var/log/rustdesk-hbbr.log;
else
install -o ${rustdesk_hbbr_user} -g ${rustdesk_hbbr_group} /dev/null /var/log/rustdesk-hbbr.log;
chmod 660 /var/log/rustdesk-hbbr.log;
fi
}
run_rc_command "$1"
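A minimal sketch of enabling the script above on a FreeBSD-style host, assuming it is installed as /usr/local/etc/rc.d/rustdesk-hbbr and that hbbr sits at /usr/local/sbin/hbbr as the script expects:

# one-time setup; sysrc(8) appends the variable to /etc/rc.conf
sysrc rustdesk_hbbr_enable=YES
service rustdesk-hbbr start
# output goes to /var/log/rustdesk-hbbr.log unless the syslog command_args variant is uncommented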

rcd/rustdesk-hbbs Normal file

@@ -0,0 +1,68 @@
#!/bin/sh
# PROVIDE: rustdesk_hbbs
# REQUIRE: LOGIN
# KEYWORD: shutdown
#
# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
# to enable this service:
#
# rustdesk_hbbs_enable (bool): Set to NO by default.
# Set it to YES to enable rustdesk_hbbs.
# rustdesk_hbbs_ip (string): Set IP address/hostname of relay server to use
# Defaults to "127.0.0.1"; replace it with your relay server's hostname/IP.
# rustdesk_hbbs_args (string): Set extra arguments to pass to rustdesk_hbbs
# Default is "-r ${rustdesk_hbbs_ip} -k _".
# rustdesk_hbbs_user (string): Set user that rustdesk_hbbs will run under
# Default is "root".
# rustdesk_hbbs_group (string): Set group that rustdesk_hbbs will run under
# Default is "wheel".
. /etc/rc.subr
name=rustdesk_hbbs
desc="Rustdesk ID/Rendezvous Server"
rcvar=rustdesk_hbbs_enable
load_rc_config $name
: ${rustdesk_hbbs_enable:=NO}
: ${rustdesk_hbbs_ip:=127.0.0.1}
: ${rustdesk_hbbs_args="-r ${rustdesk_hbbs_ip} -k _"}
: ${rustdesk_hbbs_user:=rustdesk}
: ${rustdesk_hbbs_group:=rustdesk}
pidfile=/var/run/rustdesk_hbbs.pid
command=/usr/sbin/daemon
procname=/usr/local/sbin/hbbs
rustdesk_hbbs_chdir=/var/db/rustdesk-server
command_args="-p ${pidfile} -o /var/log/rustdesk-hbbs.log ${procname} ${rustdesk_hbbs_args}"
## If you want the daemon to log via syslog, comment out the above line and uncomment the replacement below
#command_args="-p ${pidfile} -T ${name} ${procname} ${rustdesk_hbbs_args}"
start_precmd=rustdesk_hbbs_startprecmd
rustdesk_hbbs_startprecmd()
{
if [ -e ${pidfile} ]; then
chown ${rustdesk_hbbs_user}:${rustdesk_hbbs_group} ${pidfile};
else
install -o ${rustdesk_hbbs_user} -g ${rustdesk_hbbs_group} /dev/null ${pidfile};
fi
if [ -e ${rustdesk_hbbs_chdir} ]; then
chown -R ${rustdesk_hbbs_user}:${rustdesk_hbbs_group} ${rustdesk_hbbs_chdir};
chmod -R 770 ${rustdesk_hbbs_chdir};
else
mkdir -m 770 ${rustdesk_hbbs_chdir};
chown ${rustdesk_hbbs_user}:${rustdesk_hbbs_group} ${rustdesk_hbbs_chdir};
fi
if [ -e /var/log/rustdesk-hbbs.log ]; then
chown -R ${rustdesk_hbbs_user}:${rustdesk_hbbs_group} /var/log/rustdesk-hbbs.log;
chmod 660 /var/log/rustdesk-hbbs.log;
else
install -o ${rustdesk_hbbs_user} -g ${rustdesk_hbbs_group} /dev/null /var/log/rustdesk-hbbs.log;
chmod 660 /var/log/rustdesk-hbbs.log;
fi
}
run_rc_command "$1"
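The same sketch for the ID/rendezvous script, assuming installation as /usr/local/etc/rc.d/rustdesk-hbbs; the relay host below is illustrative and feeds the default "-r ... -k _" arguments:

sysrc rustdesk_hbbs_enable=YES
sysrc rustdesk_hbbs_ip="hbbr.example.com"
service rustdesk-hbbs start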


@@ -1,24 +1,27 @@
use clap::App;
use hbb_common::{anyhow::Context, log, ResultType};
use hbb_common::{
allow_err, anyhow::{Context, Result}, get_version_number, log, tokio, ResultType
};
use ini::Ini;
use sodiumoxide::crypto::sign;
use std::{
collections::HashMap,
io::prelude::*,
io::Read,
net::{IpAddr, SocketAddr},
net::SocketAddr,
time::{Instant, SystemTime},
};
#[allow(dead_code)]
pub(crate) fn get_expired_time() -> Instant {
let now = Instant::now();
now.checked_sub(std::time::Duration::from_secs(3600))
.unwrap_or(now)
}
#[allow(dead_code)]
pub(crate) fn test_if_valid_server(host: &str, name: &str) -> ResultType<SocketAddr> {
use std::net::ToSocketAddrs;
let res = if host.contains(":") {
let res = if host.contains(':') {
host.to_socket_addrs()?.next().context("")
} else {
format!("{}:{}", host, 0)
@@ -32,9 +35,10 @@ pub(crate) fn test_if_valid_server(host: &str, name: &str) -> ResultType<SocketA
res
}
#[allow(dead_code)]
pub(crate) fn get_servers(s: &str, tag: &str) -> Vec<String> {
let servers: Vec<String> = s
.split(",")
.split(',')
.filter(|x| !x.is_empty() && test_if_valid_server(x, tag).is_ok())
.map(|x| x.to_owned())
.collect();
@@ -42,17 +46,19 @@ pub(crate) fn get_servers(s: &str, tag: &str) -> Vec<String> {
servers
}
#[allow(dead_code)]
#[inline]
fn arg_name(name: &str) -> String {
name.to_uppercase().replace("_", "-")
name.to_uppercase().replace('_', "-")
}
#[allow(dead_code)]
pub fn init_args(args: &str, name: &str, about: &str) {
let matches = App::new(name)
.version(crate::version::VERSION)
.author("Purslane Ltd. <info@rustdesk.com>")
.about(about)
.args_from_usage(&args)
.args_from_usage(args)
.get_matches();
if let Ok(v) = Ini::load_from_file(".env") {
if let Some(section) = v.section(None::<String>) {
@@ -71,22 +77,25 @@ pub fn init_args(args: &str, name: &str, about: &str) {
}
}
for (k, v) in matches.args {
if let Some(v) = v.vals.get(0) {
if let Some(v) = v.vals.first() {
std::env::set_var(arg_name(k), v.to_string_lossy().to_string());
}
}
}
#[allow(dead_code)]
#[inline]
pub fn get_arg(name: &str) -> String {
get_arg_or(name, "".to_owned())
}
#[allow(dead_code)]
#[inline]
pub fn get_arg_or(name: &str, default: String) -> String {
std::env::var(arg_name(name)).unwrap_or(default)
}
#[allow(dead_code)]
#[inline]
pub fn now() -> u64 {
SystemTime::now()
@@ -95,34 +104,115 @@ pub fn now() -> u64 {
.unwrap_or_default()
}
pub fn gen_sk() -> (String, Option<sign::SecretKey>) {
pub fn gen_sk(wait: u64) -> (String, Option<sign::SecretKey>) {
let sk_file = "id_ed25519";
if wait > 0 && !std::path::Path::new(sk_file).exists() {
std::thread::sleep(std::time::Duration::from_millis(wait));
}
if let Ok(mut file) = std::fs::File::open(sk_file) {
let mut contents = String::new();
if file.read_to_string(&mut contents).is_ok() {
let sk = base64::decode(&contents).unwrap_or_default();
let contents = contents.trim();
let sk = base64::decode(contents).unwrap_or_default();
if sk.len() == sign::SECRETKEYBYTES {
let mut tmp = [0u8; sign::SECRETKEYBYTES];
tmp[..].copy_from_slice(&sk);
let pk = base64::encode(&tmp[sign::SECRETKEYBYTES / 2..]);
log::info!("Private key comes from {}", sk_file);
return (pk, Some(sign::SecretKey(tmp)));
} else {
// don't use log here, since it is async
println!("Fatal error: malformed private key in {sk_file}.");
std::process::exit(1);
}
}
} else {
let (pk, sk) = sign::gen_keypair();
let pub_file = format!("{}.pub", sk_file);
let gen_func = || {
let (tmp, sk) = sign::gen_keypair();
(base64::encode(tmp), sk)
};
let (mut pk, mut sk) = gen_func();
for _ in 0..300 {
if !pk.contains('/') && !pk.contains(':') {
break;
}
(pk, sk) = gen_func();
}
let pub_file = format!("{sk_file}.pub");
if let Ok(mut f) = std::fs::File::create(&pub_file) {
f.write_all(base64::encode(pk).as_bytes()).ok();
f.write_all(pk.as_bytes()).ok();
if let Ok(mut f) = std::fs::File::create(sk_file) {
let s = base64::encode(&sk);
if f.write_all(s.as_bytes()).is_ok() {
log::info!("Private/public key written to {}/{}", sk_file, pub_file);
log::debug!("Public key: {:?}", pk);
return (base64::encode(pk), Some(sk));
log::debug!("Public key: {}", pk);
return (pk, Some(sk));
}
}
}
}
("".to_owned(), None)
}
#[cfg(unix)]
pub async fn listen_signal() -> Result<()> {
use hbb_common::tokio;
use hbb_common::tokio::signal::unix::{signal, SignalKind};
tokio::spawn(async {
let mut s = signal(SignalKind::terminate())?;
let terminate = s.recv();
let mut s = signal(SignalKind::interrupt())?;
let interrupt = s.recv();
let mut s = signal(SignalKind::quit())?;
let quit = s.recv();
tokio::select! {
_ = terminate => {
log::info!("signal terminate");
}
_ = interrupt => {
log::info!("signal interrupt");
}
_ = quit => {
log::info!("signal quit");
}
}
Ok(())
})
.await?
}
#[cfg(not(unix))]
pub async fn listen_signal() -> Result<()> {
let () = std::future::pending().await;
unreachable!();
}
pub fn check_software_update() {
const ONE_DAY_IN_SECONDS: u64 = 60 * 60 * 24;
std::thread::spawn(move || loop {
std::thread::spawn(move || allow_err!(check_software_update_()));
std::thread::sleep(std::time::Duration::from_secs(ONE_DAY_IN_SECONDS));
});
}
#[tokio::main(flavor = "current_thread")]
async fn check_software_update_() -> hbb_common::ResultType<()> {
let (request, url) = hbb_common::version_check_request(hbb_common::VER_TYPE_RUSTDESK_SERVER.to_string());
let latest_release_response = reqwest::Client::builder().build()?
.post(url)
.json(&request)
.send()
.await?;
let bytes = latest_release_response.bytes().await?;
let resp: hbb_common::VersionCheckResponse = serde_json::from_slice(&bytes)?;
let response_url = resp.url;
let latest_release_version = response_url.rsplit('/').next().unwrap_or_default();
if get_version_number(&latest_release_version) > get_version_number(crate::version::VERSION) {
log::info!("new version is available: {}", latest_release_version);
}
Ok(())
}
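gen_sk above keeps the keypair in the server's working directory under the hard-coded names id_ed25519 and id_ed25519.pub; a small hedged sketch of working with them (behaviour inferred from the function above):

# the base64 public half written by gen_sk; this is typically the key handed out to clients
cat id_ed25519.pub
# removing the key files makes the next start take the generation branch and write a
# fresh pair (gen_sk retries so the public key avoids '/' and ':')
rm id_ed25519 id_ed25519.pub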


@@ -1,6 +1,5 @@
use async_trait::async_trait;
use hbb_common::{log, ResultType};
use serde_json::value::Value;
use sqlx::{
sqlite::SqliteConnectOptions, ConnectOptions, Connection, Error as SqlxError, SqliteConnection,
};
@@ -8,9 +7,6 @@ use std::{ops::DerefMut, str::FromStr};
//use sqlx::postgres::PgPoolOptions;
//use sqlx::mysql::MySqlPoolOptions;
pub(crate) type DB = sqlx::Sqlite;
pub(crate) type MapValue = serde_json::map::Map<String, Value>;
pub(crate) type MapStr = std::collections::HashMap<String, String>;
type Pool = deadpool::managed::Pool<DbPool>;
pub struct DbPool {
@@ -56,7 +52,7 @@ impl Database {
std::fs::File::create(url).ok();
}
let n: usize = std::env::var("MAX_DATABASE_CONNECTIONS")
.unwrap_or("1".to_owned())
.unwrap_or_else(|_| "1".to_owned())
.parse()
.unwrap_or(1);
log::debug!("MAX_DATABASE_CONNECTIONS={}", n);
@@ -107,36 +103,11 @@ impl Database {
.await?)
}
pub async fn get_peer_id(&self, guid: &[u8]) -> ResultType<Option<String>> {
Ok(sqlx::query!("select id from peer where guid = ?", guid)
.fetch_optional(self.pool.get().await?.deref_mut())
.await?
.map(|x| x.id))
}
#[inline]
pub async fn get_conn(&self) -> ResultType<deadpool::managed::Object<DbPool>> {
Ok(self.pool.get().await?)
}
pub async fn update_peer(&self, payload: MapValue, guid: &[u8]) -> ResultType<()> {
let mut conn = self.get_conn().await?;
let mut tx = conn.begin().await?;
if let Some(v) = payload.get("note") {
let v = get_str(v);
sqlx::query!("update peer set note = ? where guid = ?", v, guid)
.execute(&mut tx)
.await?;
}
tx.commit().await?;
Ok(())
}
pub async fn insert_peer(
&self,
id: &str,
uuid: &Vec<u8>,
pk: &Vec<u8>,
uuid: &[u8],
pk: &[u8],
info: &str,
) -> ResultType<Vec<u8>> {
let guid = uuid::Uuid::new_v4().as_bytes().to_vec();
@@ -157,7 +128,7 @@ impl Database {
&self,
guid: &Vec<u8>,
id: &str,
pk: &Vec<u8>,
pk: &[u8],
info: &str,
) -> ResultType<()> {
sqlx::query!(
@@ -208,24 +179,3 @@ mod tests {
hbb_common::futures::future::join_all(jobs).await;
}
}
#[inline]
pub fn guid2str(guid: &Vec<u8>) -> String {
let mut bytes = [0u8; 16];
bytes[..].copy_from_slice(&guid);
uuid::Uuid::from_bytes(bytes).to_string()
}
pub(crate) fn get_str(v: &Value) -> Option<&str> {
match v {
Value::String(v) => {
let v = v.trim();
if v.is_empty() {
None
} else {
Some(v)
}
}
_ => None,
}
}
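The connection pool above is sized purely from the environment; a hedged sketch of overriding it (DB_URL is read by the peer-map code later in this diff, and db_v2.sqlite3 is the in-tree default name):

MAX_DATABASE_CONNECTIONS=4 DB_URL=./db_v2.sqlite3 ./hbbs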


@@ -13,10 +13,9 @@ fn main() -> ResultType<()> {
.write_mode(WriteMode::Async)
.start()?;
let args = format!(
"-p, --port=[NUMBER(default={})] 'Sets the listening port'
"-p, --port=[NUMBER(default={RELAY_PORT})] 'Sets the listening port'
-k, --key=[KEY] 'Only allow the client with the same key'
",
RELAY_PORT,
);
let matches = App::new("hbbr")
.version(version::VERSION)
@@ -29,9 +28,18 @@ fn main() -> ResultType<()> {
section.iter().for_each(|(k, v)| std::env::set_var(k, v));
}
}
let mut port = RELAY_PORT;
if let Ok(v) = std::env::var("PORT") {
let v: i32 = v.parse().unwrap_or_default();
if v > 0 {
port = v + 1;
}
}
start(
matches.value_of("port").unwrap_or(&RELAY_PORT.to_string()),
matches.value_of("key").unwrap_or(""),
matches.value_of("port").unwrap_or(&port.to_string()),
matches
.value_of("key")
.unwrap_or(&std::env::var("KEY").unwrap_or_default()),
)?;
Ok(())
}
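With the change above, hbbr can be driven from the environment (or a .env file) rather than flags; a sketch with illustrative values (PORT names the hbbs port, so the relay binds PORT + 1; KEY is only consulted when -k is absent):

PORT=21116 KEY=_ ./hbbr   # with this PORT, the relay listens on 21117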


@@ -15,15 +15,14 @@ fn main() -> ResultType<()> {
.start()?;
let args = format!(
"-c --config=[FILE] +takes_value 'Sets a custom config file'
-p, --port=[NUMBER(default={})] 'Sets the listening port'
-p, --port=[NUMBER(default={RENDEZVOUS_PORT})] 'Sets the listening port'
-s, --serial=[NUMBER(default=0)] 'Sets configure update serial number'
-R, --rendezvous-servers=[HOSTS] 'Sets rendezvous servers, seperated by colon'
-R, --rendezvous-servers=[HOSTS] 'Sets rendezvous servers, separated by comma'
-u, --software-url=[URL] 'Sets download url of RustDesk software of newest version'
-r, --relay-servers=[HOST] 'Sets the default relay servers, seperated by colon'
-M, --rmem=[NUMBER(default={})] 'Sets UDP recv buffer size, set system rmem_max first, e.g., sudo sysctl -w net.core.rmem_max=52428800. vi /etc/sysctl.conf, net.core.rmem_max=52428800, sudo sysctl p'
-r, --relay-servers=[HOST] 'Sets the default relay servers, separated by comma'
-M, --rmem=[NUMBER(default={RMEM})] 'Sets UDP recv buffer size, set system rmem_max first, e.g., sudo sysctl -w net.core.rmem_max=52428800. vi /etc/sysctl.conf, net.core.rmem_max=52428800, sudo sysctl p'
, --mask=[MASK] 'Determine if the connection comes from LAN, e.g. 192.168.0.0/16'
-k, --key=[KEY] 'Only allow the client with the same key'",
RENDEZVOUS_PORT,
RMEM,
);
init_args(&args, "hbbs", "RustDesk ID/Rendezvous Server");
let port = get_arg_or("port", RENDEZVOUS_PORT.to_string()).parse::<i32>()?;
@@ -32,6 +31,7 @@ fn main() -> ResultType<()> {
}
let rmem = get_arg("rmem").parse::<usize>().unwrap_or(RMEM);
let serial: i32 = get_arg("serial").parse().unwrap_or(0);
RendezvousServer::start(port, serial, &get_arg("key"), rmem)?;
crate::common::check_software_update();
RendezvousServer::start(port, serial, &get_arg_or("key", "-".to_owned()), rmem)?;
Ok(())
}
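The help text above already lists the knobs; a hedged example of a typical invocation with an enlarged UDP receive buffer (host and size are illustrative):

# raise the kernel limit first, as the -M help text suggests (Linux)
sudo sysctl -w net.core.rmem_max=52428800
# start hbbs pointing clients at a relay; "-k _" makes it enforce its generated key
./hbbs -r relay.example.com -M 52428800 -k _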


@@ -1,6 +1,7 @@
use crate::common::*;
use crate::database;
use hbb_common::{
bytes::Bytes,
log,
rendezvous_proto::*,
tokio::sync::{Mutex, RwLock},
@@ -9,15 +10,18 @@ use hbb_common::{
use serde_derive::{Deserialize, Serialize};
use std::{collections::HashMap, collections::HashSet, net::SocketAddr, sync::Arc, time::Instant};
type IpBlockMap = HashMap<String, ((u32, Instant), (HashSet<String>, Instant))>;
type UserStatusMap = HashMap<Vec<u8>, Arc<(Option<Vec<u8>>, bool)>>;
type IpChangesMap = HashMap<String, (Instant, HashMap<String, i32>)>;
lazy_static::lazy_static! {
pub(crate) static ref IP_BLOCKER: Mutex<HashMap<String, ((u32, Instant), (HashSet<String>, Instant))>> = Default::default();
pub(crate) static ref USER_STATUS: RwLock<HashMap<Vec<u8>, Arc<(Option<Vec<u8>>, bool)>>> = Default::default();
pub(crate) static ref IP_CHANGES: Mutex<HashMap<String, (Instant, HashMap<String, i32>)>> = Default::default();
pub(crate) static ref IP_BLOCKER: Mutex<IpBlockMap> = Default::default();
pub(crate) static ref USER_STATUS: RwLock<UserStatusMap> = Default::default();
pub(crate) static ref IP_CHANGES: Mutex<IpChangesMap> = Default::default();
}
pub static IP_CHANGE_DUR: u64 = 180;
pub static IP_CHANGE_DUR_X2: u64 = IP_CHANGE_DUR * 2;
pub static DAY_SECONDS: u64 = 3600 * 24;
pub static IP_BLOCK_DUR: u64 = 60;
pub const IP_CHANGE_DUR: u64 = 180;
pub const IP_CHANGE_DUR_X2: u64 = IP_CHANGE_DUR * 2;
pub const DAY_SECONDS: u64 = 3600 * 24;
pub const IP_BLOCK_DUR: u64 = 60;
#[derive(Debug, Default, Serialize, Deserialize, Clone)]
pub(crate) struct PeerInfo {
@@ -25,16 +29,15 @@ pub(crate) struct PeerInfo {
pub(crate) ip: String,
}
#[derive(Clone, Debug)]
pub(crate) struct Peer {
pub(crate) socket_addr: SocketAddr,
pub(crate) last_reg_time: Instant,
pub(crate) guid: Vec<u8>,
pub(crate) uuid: Vec<u8>,
pub(crate) pk: Vec<u8>,
pub(crate) user: Option<Vec<u8>>,
pub(crate) uuid: Bytes,
pub(crate) pk: Bytes,
// pub(crate) user: Option<Vec<u8>>,
pub(crate) info: PeerInfo,
pub(crate) disabled: bool,
// pub(crate) disabled: bool,
pub(crate) reg_pk: (u32, Instant), // how often register_pk
}
@@ -44,11 +47,11 @@ impl Default for Peer {
socket_addr: "0.0.0.0:0".parse().unwrap(),
last_reg_time: get_expired_time(),
guid: Vec::new(),
uuid: Vec::new(),
pk: Vec::new(),
uuid: Bytes::new(),
pk: Bytes::new(),
info: Default::default(),
user: None,
disabled: false,
// user: None,
// disabled: false,
reg_pk: (0, get_expired_time()),
}
}
@@ -65,7 +68,6 @@ pub(crate) struct PeerMap {
impl PeerMap {
pub(crate) async fn new() -> ResultType<Self> {
let db = std::env::var("DB_URL").unwrap_or({
#[allow(unused_mut)]
let mut db = "db_v2.sqlite3".to_owned();
#[cfg(all(windows, not(debug_assertions)))]
{
@@ -75,7 +77,7 @@ impl PeerMap {
}
#[cfg(not(windows))]
{
db = format!("./{}", db);
db = format!("./{db}");
}
db
});
@@ -93,8 +95,8 @@ impl PeerMap {
id: String,
peer: LockPeer,
addr: SocketAddr,
uuid: Vec<u8>,
pk: Vec<u8>,
uuid: Bytes,
pk: Bytes,
ip: String,
) -> register_pk_response::Result {
log::info!("update_pk {} {:?} {:?} {:?}", id, addr, uuid, pk);
@@ -132,24 +134,22 @@ impl PeerMap {
#[inline]
pub(crate) async fn get(&self, id: &str) -> Option<LockPeer> {
let p = self.map.read().await.get(id).map(|x| x.clone());
let p = self.map.read().await.get(id).cloned();
if p.is_some() {
return p;
} else {
if let Ok(Some(v)) = self.db.get_peer(id).await {
let peer = Peer {
guid: v.guid,
uuid: v.uuid,
pk: v.pk,
user: v.user,
info: serde_json::from_str::<PeerInfo>(&v.info).unwrap_or_default(),
disabled: v.status == Some(0),
..Default::default()
};
let peer = Arc::new(RwLock::new(peer));
self.map.write().await.insert(id.to_owned(), peer.clone());
return Some(peer);
}
} else if let Ok(Some(v)) = self.db.get_peer(id).await {
let peer = Peer {
guid: v.guid,
uuid: v.uuid.into(),
pk: v.pk.into(),
// user: v.user,
info: serde_json::from_str::<PeerInfo>(&v.info).unwrap_or_default(),
// disabled: v.status == Some(0),
..Default::default()
};
let peer = Arc::new(RwLock::new(peer));
self.map.write().await.insert(id.to_owned(), peer.clone());
return Some(peer);
}
None
}
@@ -170,16 +170,11 @@ impl PeerMap {
#[inline]
pub(crate) async fn get_in_memory(&self, id: &str) -> Option<LockPeer> {
self.map.read().await.get(id).map(|x| x.clone())
self.map.read().await.get(id).cloned()
}
#[inline]
pub(crate) async fn is_in_memory(&self, id: &str) -> bool {
self.map.read().await.contains_key(id)
}
#[inline]
pub(crate) async fn remove(&self, id: &str) {
self.map.write().await.remove(id);
}
}


@@ -8,7 +8,7 @@ use hbb_common::{
protobuf::Message as _,
rendezvous_proto::*,
sleep,
tcp::{new_listener, FramedStream},
tcp::{listen_any, FramedStream},
timeout,
tokio::{
self,
@@ -25,6 +25,7 @@ use std::{
io::prelude::*,
io::Error,
net::SocketAddr,
sync::atomic::{AtomicUsize, Ordering},
};
type Usage = (usize, usize, usize, usize);
@@ -36,13 +37,13 @@ lazy_static::lazy_static! {
static ref BLOCKLIST: RwLock<HashSet<String>> = Default::default();
}
static mut DOWNGRADE_THRESHOLD: f64 = 0.66;
static mut DOWNGRADE_START_CHECK: usize = 1800_000; // in ms
static mut LIMIT_SPEED: usize = 4 * 1024 * 1024; // in bit/s
static mut TOTAL_BANDWIDTH: usize = 1024 * 1024 * 1024; // in bit/s
static mut SINGLE_BANDWIDTH: usize = 16 * 1024 * 1024; // in bit/s
const BLACKLIST_FILE: &'static str = "blacklist.txt";
const BLOCKLIST_FILE: &'static str = "blocklist.txt";
static DOWNGRADE_THRESHOLD_100: AtomicUsize = AtomicUsize::new(66); // 0.66
static DOWNGRADE_START_CHECK: AtomicUsize = AtomicUsize::new(1_800_000); // in ms
static LIMIT_SPEED: AtomicUsize = AtomicUsize::new(4 * 1024 * 1024); // in bit/s
static TOTAL_BANDWIDTH: AtomicUsize = AtomicUsize::new(1024 * 1024 * 1024); // in bit/s
static SINGLE_BANDWIDTH: AtomicUsize = AtomicUsize::new(16 * 1024 * 1024); // in bit/s
const BLACKLIST_FILE: &str = "blacklist.txt";
const BLOCKLIST_FILE: &str = "blocklist.txt";
#[tokio::main(flavor = "multi_thread")]
pub async fn start(port: &str, key: &str) -> ResultType<()> {
@@ -50,8 +51,8 @@ pub async fn start(port: &str, key: &str) -> ResultType<()> {
if let Ok(mut file) = std::fs::File::open(BLACKLIST_FILE) {
let mut contents = String::new();
if file.read_to_string(&mut contents).is_ok() {
for x in contents.split("\n") {
if let Some(ip) = x.trim().split(' ').nth(0) {
for x in contents.split('\n') {
if let Some(ip) = x.trim().split(' ').next() {
BLACKLIST.write().await.insert(ip.to_owned());
}
}
@@ -65,8 +66,8 @@ pub async fn start(port: &str, key: &str) -> ResultType<()> {
if let Ok(mut file) = std::fs::File::open(BLOCKLIST_FILE) {
let mut contents = String::new();
if file.read_to_string(&mut contents).is_ok() {
for x in contents.split("\n") {
if let Some(ip) = x.trim().split(' ').nth(0) {
for x in contents.split('\n') {
if let Some(ip) = x.trim().split(' ').next() {
BLOCKLIST.write().await.insert(ip.to_owned());
}
}
@@ -77,19 +78,21 @@ pub async fn start(port: &str, key: &str) -> ResultType<()> {
BLOCKLIST_FILE,
BLOCKLIST.read().await.len()
);
let addr = format!("0.0.0.0:{}", port);
log::info!("Listening on tcp {}", addr);
let addr2 = format!("0.0.0.0:{}", port.parse::<u16>().unwrap() + 2);
log::info!("Listening on websocket {}", addr2);
loop {
log::info!("Start");
io_loop(
new_listener(&addr, false).await?,
new_listener(&addr2, false).await?,
&key,
)
.await;
}
let port: u16 = port.parse()?;
log::info!("Listening on tcp :{}", port);
let port2 = port + 2;
log::info!("Listening on websocket :{}", port2);
let main_task = async move {
loop {
log::info!("Start");
io_loop(listen_any(port).await?, listen_any(port2).await?, &key).await;
}
};
let listen_signal = crate::common::listen_signal();
tokio::select!(
res = main_task => res,
res = listen_signal => res,
)
}
fn check_params() {
@@ -97,62 +100,60 @@ fn check_params() {
.map(|x| x.parse::<f64>().unwrap_or(0.))
.unwrap_or(0.);
if tmp > 0. {
unsafe {
DOWNGRADE_THRESHOLD = tmp;
}
DOWNGRADE_THRESHOLD_100.store((tmp * 100.) as _, Ordering::SeqCst);
}
unsafe { log::info!("DOWNGRADE_THRESHOLD: {}", DOWNGRADE_THRESHOLD) };
log::info!(
"DOWNGRADE_THRESHOLD: {}",
DOWNGRADE_THRESHOLD_100.load(Ordering::SeqCst) as f64 / 100.
);
let tmp = std::env::var("DOWNGRADE_START_CHECK")
.map(|x| x.parse::<usize>().unwrap_or(0))
.unwrap_or(0);
if tmp > 0 {
unsafe {
DOWNGRADE_START_CHECK = tmp * 1000;
}
DOWNGRADE_START_CHECK.store(tmp * 1000, Ordering::SeqCst);
}
unsafe { log::info!("DOWNGRADE_START_CHECK: {}s", DOWNGRADE_START_CHECK / 1000) };
log::info!(
"DOWNGRADE_START_CHECK: {}s",
DOWNGRADE_START_CHECK.load(Ordering::SeqCst) / 1000
);
let tmp = std::env::var("LIMIT_SPEED")
.map(|x| x.parse::<f64>().unwrap_or(0.))
.unwrap_or(0.);
if tmp > 0. {
unsafe {
LIMIT_SPEED = (tmp * 1024. * 1024.) as usize;
}
LIMIT_SPEED.store((tmp * 1024. * 1024.) as usize, Ordering::SeqCst);
}
unsafe { log::info!("LIMIT_SPEED: {}Mb/s", LIMIT_SPEED as f64 / 1024. / 1024.) };
log::info!(
"LIMIT_SPEED: {}Mb/s",
LIMIT_SPEED.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
let tmp = std::env::var("TOTAL_BANDWIDTH")
.map(|x| x.parse::<f64>().unwrap_or(0.))
.unwrap_or(0.);
if tmp > 0. {
unsafe {
TOTAL_BANDWIDTH = (tmp * 1024. * 1024.) as usize;
}
TOTAL_BANDWIDTH.store((tmp * 1024. * 1024.) as usize, Ordering::SeqCst);
}
unsafe {
log::info!(
"TOTAL_BANDWIDTH: {}Mb/s",
TOTAL_BANDWIDTH as f64 / 1024. / 1024.
)
};
log::info!(
"TOTAL_BANDWIDTH: {}Mb/s",
TOTAL_BANDWIDTH.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
let tmp = std::env::var("SINGLE_BANDWIDTH")
.map(|x| x.parse::<f64>().unwrap_or(0.))
.unwrap_or(0.);
if tmp > 0. {
unsafe {
SINGLE_BANDWIDTH = (tmp * 1024. * 1024.) as usize;
}
SINGLE_BANDWIDTH.store((tmp * 1024. * 1024.) as usize, Ordering::SeqCst);
}
unsafe {
log::info!(
"SINGLE_BANDWIDTH: {}Mb/s",
SINGLE_BANDWIDTH as f64 / 1024. / 1024.
)
};
log::info!(
"SINGLE_BANDWIDTH: {}Mb/s",
SINGLE_BANDWIDTH.load(Ordering::SeqCst) as f64 / 1024. / 1024.
)
}
async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
use std::fmt::Write;
let mut res = "".to_owned();
let mut fds = cmd.trim().split(" ");
let mut fds = cmd.trim().split(' ');
match fds.next() {
Some("h") => {
res = format!(
@@ -173,7 +174,7 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
}
Some("blacklist-add" | "ba") => {
if let Some(ip) = fds.next() {
for ip in ip.split("|") {
for ip in ip.split('|') {
BLACKLIST.write().await.insert(ip.to_owned());
}
}
@@ -183,7 +184,7 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
if ip == "all" {
BLACKLIST.write().await.clear();
} else {
for ip in ip.split("|") {
for ip in ip.split('|') {
BLACKLIST.write().await.remove(ip);
}
}
@@ -194,13 +195,13 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
res = format!("{}\n", BLACKLIST.read().await.get(ip).is_some());
} else {
for ip in BLACKLIST.read().await.clone().into_iter() {
res += &format!("{}\n", ip);
let _ = writeln!(res, "{ip}");
}
}
}
Some("blocklist-add" | "Ba") => {
if let Some(ip) = fds.next() {
for ip in ip.split("|") {
for ip in ip.split('|') {
BLOCKLIST.write().await.insert(ip.to_owned());
}
}
@@ -210,7 +211,7 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
if ip == "all" {
BLOCKLIST.write().await.clear();
} else {
for ip in ip.split("|") {
for ip in ip.split('|') {
BLOCKLIST.write().await.remove(ip);
}
}
@@ -221,7 +222,7 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
res = format!("{}\n", BLOCKLIST.read().await.get(ip).is_some());
} else {
for ip in BLOCKLIST.read().await.clone().into_iter() {
res += &format!("{}\n", ip);
let _ = writeln!(res, "{ip}");
}
}
}
@@ -229,76 +230,68 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<f64>() {
if v > 0. {
unsafe {
DOWNGRADE_THRESHOLD = v;
}
DOWNGRADE_THRESHOLD_100.store((v * 100.) as _, Ordering::SeqCst);
}
}
} else {
unsafe {
res = format!("{}\n", DOWNGRADE_THRESHOLD);
}
res = format!(
"{}\n",
DOWNGRADE_THRESHOLD_100.load(Ordering::SeqCst) as f64 / 100.
);
}
}
Some("downgrade-start-check" | "t") => {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<usize>() {
if v > 0 {
unsafe {
DOWNGRADE_START_CHECK = v * 1000;
}
DOWNGRADE_START_CHECK.store(v * 1000, Ordering::SeqCst);
}
}
} else {
unsafe {
res = format!("{}s\n", DOWNGRADE_START_CHECK / 1000);
}
res = format!("{}s\n", DOWNGRADE_START_CHECK.load(Ordering::SeqCst) / 1000);
}
}
Some("limit-speed" | "ls") => {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<f64>() {
if v > 0. {
unsafe {
LIMIT_SPEED = (v * 1024. * 1024.) as _;
}
LIMIT_SPEED.store((v * 1024. * 1024.) as _, Ordering::SeqCst);
}
}
} else {
unsafe {
res = format!("{}Mb/s\n", LIMIT_SPEED as f64 / 1024. / 1024.);
}
res = format!(
"{}Mb/s\n",
LIMIT_SPEED.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
}
}
Some("total-bandwidth" | "tb") => {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<f64>() {
if v > 0. {
unsafe {
TOTAL_BANDWIDTH = (v * 1024. * 1024.) as _;
limiter.set_speed_limit(TOTAL_BANDWIDTH as _);
}
TOTAL_BANDWIDTH.store((v * 1024. * 1024.) as _, Ordering::SeqCst);
limiter.set_speed_limit(TOTAL_BANDWIDTH.load(Ordering::SeqCst) as _);
}
}
} else {
unsafe {
res = format!("{}Mb/s\n", TOTAL_BANDWIDTH as f64 / 1024. / 1024.);
}
res = format!(
"{}Mb/s\n",
TOTAL_BANDWIDTH.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
}
}
Some("single-bandwidth" | "sb") => {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<f64>() {
if v > 0. {
unsafe {
SINGLE_BANDWIDTH = (v * 1024. * 1024.) as _;
}
SINGLE_BANDWIDTH.store((v * 1024. * 1024.) as _, Ordering::SeqCst);
}
}
} else {
unsafe {
res = format!("{}Mb/s\n", SINGLE_BANDWIDTH as f64 / 1024. / 1024.);
}
res = format!(
"{}Mb/s\n",
SINGLE_BANDWIDTH.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
}
}
Some("usage" | "u") => {
@@ -306,15 +299,16 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
.read()
.await
.iter()
.map(|x| (x.0.clone(), x.1.clone()))
.map(|x| (x.0.clone(), *x.1))
.collect();
tmp.sort_by(|a, b| ((b.1).1).partial_cmp(&(a.1).1).unwrap());
for (ip, (elapsed, total, highest, speed)) in tmp {
if elapsed <= 0 {
if elapsed == 0 {
continue;
}
res += &format!(
"{}: {}s {:.2}MB {}kb/s {}kb/s {}kb/s\n",
let _ = writeln!(
res,
"{}: {}s {:.2}MB {}kb/s {}kb/s {}kb/s",
ip,
elapsed / 1000,
total as f64 / 1024. / 1024. / 8.,
@@ -331,7 +325,7 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
async fn io_loop(listener: TcpListener, listener2: TcpListener, key: &str) {
check_params();
let limiter = <Limiter>::new(unsafe { TOTAL_BANDWIDTH as _ });
let limiter = <Limiter>::new(TOTAL_BANDWIDTH.load(Ordering::SeqCst) as _);
loop {
tokio::select! {
res = listener.accept() => {
@@ -369,12 +363,12 @@ async fn handle_connection(
key: &str,
ws: bool,
) {
let ip = addr.ip().to_string();
if !ws && ip == "127.0.0.1" {
let ip = hbb_common::try_into_v4(addr).ip();
if !ws && ip.is_loopback() {
let limiter = limiter.clone();
tokio::spawn(async move {
let mut stream = stream;
let mut buffer = [0; 64];
let mut buffer = [0; 1024];
if let Ok(Ok(n)) = timeout(1000, stream.read(&mut buffer[..])).await {
if let Ok(data) = std::str::from_utf8(&buffer[..n]) {
let res = check_cmd(data, limiter).await;
@@ -384,6 +378,7 @@ async fn handle_connection(
});
return;
}
let ip = ip.to_string();
if BLOCKLIST.read().await.get(&ip).is_some() {
log::info!("{} blocked", ip);
return;
@@ -397,19 +392,30 @@ async fn handle_connection(
async fn make_pair(
stream: TcpStream,
addr: SocketAddr,
mut addr: SocketAddr,
key: &str,
limiter: Limiter,
ws: bool,
) -> ResultType<()> {
if ws {
make_pair_(
tokio_tungstenite::accept_async(stream).await?,
addr,
key,
limiter,
)
.await;
use tokio_tungstenite::tungstenite::handshake::server::{Request, Response};
let callback = |req: &Request, response: Response| {
let headers = req.headers();
let real_ip = headers
.get("X-Real-IP")
.or_else(|| headers.get("X-Forwarded-For"))
.and_then(|header_value| header_value.to_str().ok());
if let Some(ip) = real_ip {
if ip.contains('.') {
addr = format!("{ip}:0").parse().unwrap_or(addr);
} else {
addr = format!("[{ip}]:0").parse().unwrap_or(addr);
}
}
Ok(response)
};
let ws_stream = tokio_tungstenite::accept_hdr_async(stream, callback).await?;
make_pair_(ws_stream, addr, key, limiter).await;
} else {
make_pair_(FramedStream::from(stream, addr), addr, key, limiter).await;
}
@@ -420,7 +426,7 @@ async fn make_pair_(stream: impl StreamTrait, addr: SocketAddr, key: &str, limit
let mut stream = stream;
if let Ok(Some(Ok(bytes))) = timeout(30_000, stream.recv()).await {
if let Ok(msg_in) = RendezvousMessage::parse_from_bytes(&bytes) {
if let Some(rendezvous_message::Union::request_relay(rf)) = msg_in.union {
if let Some(rendezvous_message::Union::RequestRelay(rf)) = msg_in.union {
if !key.is_empty() && rf.licence_key != key {
return;
}
@@ -469,10 +475,11 @@ async fn relay(
let mut highest_s = 0;
let mut downgrade: bool = false;
let mut blacked: bool = false;
let limiter = <Limiter>::new(unsafe { SINGLE_BANDWIDTH as _ });
let blacklist_limiter = <Limiter>::new(unsafe { LIMIT_SPEED as _ });
let sb = SINGLE_BANDWIDTH.load(Ordering::SeqCst) as f64;
let limiter = <Limiter>::new(sb);
let blacklist_limiter = <Limiter>::new(LIMIT_SPEED.load(Ordering::SeqCst) as _);
let downgrade_threshold =
(unsafe { SINGLE_BANDWIDTH as f64 * DOWNGRADE_THRESHOLD } / 1000.) as usize; // in bit/ms
(sb * DOWNGRADE_THRESHOLD_100.load(Ordering::SeqCst) as f64 / 100. / 1000.) as usize; // in bit/ms
let mut timer = interval(Duration::from_secs(3));
let mut last_recv_time = std::time::Instant::now();
loop {
@@ -489,7 +496,7 @@ async fn relay(
total_limiter.consume(nb).await;
total += nb;
total_s += nb;
if bytes.len() > 0 {
if !bytes.is_empty() {
stream.send_raw(bytes.into()).await?;
}
} else {
@@ -508,7 +515,7 @@ async fn relay(
total_limiter.consume(nb).await;
total += nb;
total_s += nb;
if bytes.len() > 0 {
if !bytes.is_empty() {
peer.send_raw(bytes.into()).await?;
}
} else {
@@ -530,7 +537,7 @@ async fn relay(
}
blacked = BLACKLIST.read().await.get(&ip).is_some();
tm = std::time::Instant::now();
let speed = total_s / (n as usize);
let speed = total_s / n;
if speed > highest_s {
highest_s = speed;
}
@@ -540,16 +547,17 @@ async fn relay(
(elapsed as _, total as _, highest_s as _, speed as _),
);
total_s = 0;
if elapsed > unsafe { DOWNGRADE_START_CHECK } && !downgrade {
if total > elapsed * downgrade_threshold {
downgrade = true;
log::info!(
"Downgrade {}, exceed downgrade threshold {}bit/ms in {}ms",
id,
downgrade_threshold,
elapsed
);
}
if elapsed > DOWNGRADE_START_CHECK.load(Ordering::SeqCst)
&& !downgrade
&& total > elapsed * downgrade_threshold
{
downgrade = true;
log::info!(
"Downgrade {}, exceed downgrade threshold {}bit/ms in {}ms",
id,
downgrade_threshold,
elapsed
);
}
}
}
@@ -566,7 +574,7 @@ fn get_server_sk(key: &str) -> String {
}
if key == "-" || key == "_" {
let (pk, _) = crate::common::gen_sk();
let (pk, _) = crate::common::gen_sk(300);
key = pk;
}


@@ -1,9 +1,10 @@
use crate::common::*;
use crate::peer::*;
use hbb_common::{
allow_err,
allow_err, bail,
bytes::{Bytes, BytesMut},
bytes_codec::BytesCodec,
config,
futures::future::join_all,
futures_util::{
sink::SinkExt,
@@ -15,7 +16,7 @@ use hbb_common::{
register_pk_response::Result::{TOO_FREQUENT, UUID_MISMATCH},
*,
},
tcp::{new_listener, FramedStream},
tcp::{listen_any, FramedStream},
timeout,
tokio::{
self,
@@ -25,21 +26,23 @@ use hbb_common::{
time::{interval, Duration},
},
tokio_util::codec::Framed,
try_into_v4,
udp::FramedSocket,
AddrMangle, ResultType,
};
use ipnetwork::Ipv4Network;
use sodiumoxide::crypto::sign;
use std::{
collections::HashMap,
net::{IpAddr, Ipv4Addr, SocketAddr},
net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr},
sync::atomic::{AtomicBool, AtomicUsize, Ordering},
sync::Arc,
time::Instant,
};
const ADDR_127: IpAddr = IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1));
#[derive(Clone, Debug)]
enum Data {
Msg(RendezvousMessage, SocketAddr),
Msg(Box<RendezvousMessage>, SocketAddr),
RelayServers0(String),
RelayServers(RelayServers),
}
@@ -53,10 +56,20 @@ enum Sink {
}
type Sender = mpsc::UnboundedSender<Data>;
type Receiver = mpsc::UnboundedReceiver<Data>;
static mut ROTATION_RELAY_SERVER: usize = 0;
static ROTATION_RELAY_SERVER: AtomicUsize = AtomicUsize::new(0);
type RelayServers = Vec<String>;
static CHECK_RELAY_TIMEOUT: u64 = 3_000;
static mut ALWAYS_USE_RELAY: bool = false;
const CHECK_RELAY_TIMEOUT: u64 = 3_000;
static ALWAYS_USE_RELAY: AtomicBool = AtomicBool::new(false);
#[derive(Clone)]
struct Inner {
serial: i32,
version: String,
software_url: String,
mask: Option<Ipv4Network>,
local_ip: String,
sk: Option<sign::SecretKey>,
}
#[derive(Clone)]
pub struct RendezvousServer {
@@ -65,11 +78,8 @@ pub struct RendezvousServer {
tx: Sender,
relay_servers: Arc<RelayServers>,
relay_servers0: Arc<RelayServers>,
serial: i32,
rendezvous_servers: Arc<Vec<String>>,
version: String,
software_url: String,
sk: Option<sign::SecretKey>,
inner: Arc<Inner>,
}
enum LoopFailure {
@@ -81,107 +91,133 @@ enum LoopFailure {
impl RendezvousServer {
#[tokio::main(flavor = "multi_thread")]
pub async fn start(
port: i32,
serial: i32,
key: &str,
rmem: usize,
) -> ResultType<()> {
let addr = format!("0.0.0.0:{}", port);
let addr2 = format!("0.0.0.0:{}", port - 1);
let addr3 = format!("0.0.0.0:{}", port + 2);
pub async fn start(port: i32, serial: i32, key: &str, rmem: usize) -> ResultType<()> {
let (key, sk) = Self::get_server_sk(key);
let nat_port = port - 1;
let ws_port = port + 2;
let pm = PeerMap::new().await?;
log::info!("serial={}", serial);
let rendezvous_servers = get_servers(&get_arg("rendezvous-servers"), "rendezvous-servers");
log::info!("Listening on tcp/udp {}", addr);
log::info!("Listening on tcp {}, extra port for NAT test", addr2);
log::info!("Listening on websocket {}", addr3);
let mut socket = FramedSocket::new_with_buf_size(&addr, rmem).await?;
log::info!("Listening on tcp/udp :{}", port);
log::info!("Listening on tcp :{}, extra port for NAT test", nat_port);
log::info!("Listening on websocket :{}", ws_port);
let mut socket = create_udp_listener(port, rmem).await?;
let (tx, mut rx) = mpsc::unbounded_channel::<Data>();
let software_url = get_arg("software-url");
let version = hbb_common::get_version_from_url(&software_url);
if !version.is_empty() {
log::info!("software_url: {}, version: {}", software_url, version);
}
let mask = get_arg("mask").parse().ok();
let local_ip = if mask.is_none() {
"".to_owned()
} else {
get_arg_or(
"local-ip",
local_ip_address::local_ip()
.map(|x| x.to_string())
.unwrap_or_default(),
)
};
let mut rs = Self {
tcp_punch: Arc::new(Mutex::new(HashMap::new())),
pm,
tx: tx.clone(),
relay_servers: Default::default(),
relay_servers0: Default::default(),
serial,
rendezvous_servers: Arc::new(rendezvous_servers),
version,
software_url,
sk: None,
inner: Arc::new(Inner {
serial,
version,
software_url,
sk,
mask,
local_ip,
}),
};
let key = rs.get_server_sk(key);
log::info!("mask: {:?}", rs.inner.mask);
log::info!("local-ip: {:?}", rs.inner.local_ip);
std::env::set_var("PORT_FOR_API", port.to_string());
rs.parse_relay_servers(&get_arg("relay-servers"));
let pm = rs.pm.clone();
let mut listener = new_listener(&addr, false).await?;
let mut listener2 = new_listener(&addr2, false).await?;
let mut listener3 = new_listener(&addr3, false).await?;
let mut listener = create_tcp_listener(port).await?;
let mut listener2 = create_tcp_listener(nat_port).await?;
let mut listener3 = create_tcp_listener(ws_port).await?;
let test_addr = std::env::var("TEST_HBBS").unwrap_or_default();
if std::env::var("ALWAYS_USE_RELAY")
.unwrap_or_default()
.to_uppercase()
== "Y"
{
unsafe {
ALWAYS_USE_RELAY = true;
}
ALWAYS_USE_RELAY.store(true, Ordering::SeqCst);
}
log::info!(
"ALWAYS_USE_RELAY={}",
if unsafe { ALWAYS_USE_RELAY } {
if ALWAYS_USE_RELAY.load(Ordering::SeqCst) {
"Y"
} else {
"N"
}
);
if test_addr.to_lowercase() != "no" {
let test_addr = (if test_addr.is_empty() {
addr.replace("0.0.0.0", "127.0.0.1")
let test_addr = if test_addr.is_empty() {
listener.local_addr()?
} else {
test_addr
})
.parse::<SocketAddr>()?;
test_addr.parse()?
};
tokio::spawn(async move {
allow_err!(test_hbbs(test_addr).await);
if let Err(err) = test_hbbs(test_addr).await {
if test_addr.is_ipv6() && test_addr.ip().is_unspecified() {
let mut test_addr = test_addr;
test_addr.set_ip(IpAddr::V4(Ipv4Addr::UNSPECIFIED));
if let Err(err) = test_hbbs(test_addr).await {
log::error!("Failed to run hbbs test with {test_addr}: {err}");
std::process::exit(1);
}
} else {
log::error!("Failed to run hbbs test with {test_addr}: {err}");
std::process::exit(1);
}
}
});
};
loop {
log::info!("Start");
match rs
.io_loop(
&mut rx,
&mut listener,
&mut listener2,
&mut listener3,
&mut socket,
&key,
)
.await
{
LoopFailure::UdpSocket => {
drop(socket);
socket = FramedSocket::new_with_buf_size(&addr, rmem).await?;
}
LoopFailure::Listener => {
drop(listener);
listener = new_listener(&addr, false).await?;
}
LoopFailure::Listener2 => {
drop(listener2);
listener2 = new_listener(&addr2, false).await?;
}
LoopFailure::Listener3 => {
drop(listener3);
listener3 = new_listener(&addr3, false).await?;
let main_task = async move {
loop {
log::info!("Start");
match rs
.io_loop(
&mut rx,
&mut listener,
&mut listener2,
&mut listener3,
&mut socket,
&key,
)
.await
{
LoopFailure::UdpSocket => {
drop(socket);
socket = create_udp_listener(port, rmem).await?;
}
LoopFailure::Listener => {
drop(listener);
listener = create_tcp_listener(port).await?;
}
LoopFailure::Listener2 => {
drop(listener2);
listener2 = create_tcp_listener(nat_port).await?;
}
LoopFailure::Listener3 => {
drop(listener3);
listener3 = create_tcp_listener(ws_port).await?;
}
}
}
}
};
let listen_signal = listen_signal();
tokio::select!(
res = main_task => res,
res = listen_signal => res,
)
}
async fn io_loop(
@@ -207,7 +243,7 @@ impl RendezvousServer {
}
Some(data) = rx.recv() => {
match data {
Data::Msg(msg, addr) => { allow_err!(socket.send(&msg, addr).await); }
Data::Msg(msg, addr) => { allow_err!(socket.send(msg.as_ref(), addr).await); }
Data::RelayServers0(rs) => { self.parse_relay_servers(&rs); }
Data::RelayServers(rs) => { self.relay_servers = Arc::new(rs); }
}
@@ -277,17 +313,17 @@ impl RendezvousServer {
socket: &mut FramedSocket,
key: &str,
) -> ResultType<()> {
if let Ok(msg_in) = RendezvousMessage::parse_from_bytes(&bytes) {
if let Ok(msg_in) = RendezvousMessage::parse_from_bytes(bytes) {
match msg_in.union {
Some(rendezvous_message::Union::register_peer(rp)) => {
Some(rendezvous_message::Union::RegisterPeer(rp)) => {
// B registered
if rp.id.len() > 0 {
if !rp.id.is_empty() {
log::trace!("New peer registered: {:?} {:?}", &rp.id, &addr);
self.update_addr(rp.id, addr, socket).await?;
if self.serial > rp.serial {
if self.inner.serial > rp.serial {
let mut msg_out = RendezvousMessage::new();
msg_out.set_configure_update(ConfigUpdate {
serial: self.serial,
serial: self.inner.serial,
rendezvous_servers: (*self.rendezvous_servers).clone(),
..Default::default()
});
@@ -295,7 +331,7 @@ impl RendezvousServer {
}
}
}
Some(rendezvous_message::Union::register_pk(rk)) => {
Some(rendezvous_message::Union::RegisterPk(rk)) => {
if rk.uuid.is_empty() || rk.pk.is_empty() {
return Ok(());
}
@@ -358,12 +394,10 @@ impl RendezvousServer {
*tm = Instant::now();
ips.clear();
ips.insert(ip.clone(), 1);
} else if let Some(v) = ips.get_mut(&ip) {
*v += 1;
} else {
if let Some(v) = ips.get_mut(&ip) {
*v += 1;
} else {
ips.insert(ip.clone(), 1);
}
ips.insert(ip.clone(), 1);
}
} else {
lock.insert(
@@ -382,7 +416,7 @@ impl RendezvousServer {
});
socket.send(&msg_out, addr).await?
}
Some(rendezvous_message::Union::punch_hole_request(ph)) => {
Some(rendezvous_message::Union::PunchHoleRequest(ph)) => {
if self.pm.is_in_memory(&ph.id).await {
self.handle_udp_punch_hole_request(addr, ph, key).await?;
} else {
@@ -394,15 +428,17 @@ impl RendezvousServer {
});
}
}
Some(rendezvous_message::Union::punch_hole_sent(phs)) => {
Some(rendezvous_message::Union::PunchHoleSent(phs)) => {
self.handle_hole_sent(phs, addr, Some(socket)).await?;
}
Some(rendezvous_message::Union::local_addr(la)) => {
Some(rendezvous_message::Union::LocalAddr(la)) => {
self.handle_local_addr(la, addr, Some(socket)).await?;
}
Some(rendezvous_message::Union::configure_update(mut cu)) => {
if addr.ip() == ADDR_127 && cu.serial > self.serial {
self.serial = cu.serial;
Some(rendezvous_message::Union::ConfigureUpdate(mut cu)) => {
if try_into_v4(addr).ip().is_loopback() && cu.serial > self.inner.serial {
let mut inner: Inner = (*self.inner).clone();
inner.serial = cu.serial;
self.inner = Arc::new(inner);
self.rendezvous_servers = Arc::new(
cu.rendezvous_servers
.drain(..)
@@ -414,16 +450,16 @@ impl RendezvousServer {
);
log::info!(
"configure updated: serial={} rendezvous-servers={:?}",
self.serial,
self.inner.serial,
self.rendezvous_servers
);
}
}
Some(rendezvous_message::Union::software_update(su)) => {
if !self.version.is_empty() && su.url != self.version {
Some(rendezvous_message::Union::SoftwareUpdate(su)) => {
if !self.inner.version.is_empty() && su.url != self.inner.version {
let mut msg_out = RendezvousMessage::new();
msg_out.set_software_update(SoftwareUpdate {
url: self.software_url.clone(),
url: self.inner.software_url.clone(),
..Default::default()
});
socket.send(&msg_out, addr).await?;
@@ -444,64 +480,72 @@ impl RendezvousServer {
key: &str,
ws: bool,
) -> bool {
if let Ok(msg_in) = RendezvousMessage::parse_from_bytes(&bytes) {
if let Ok(msg_in) = RendezvousMessage::parse_from_bytes(bytes) {
match msg_in.union {
Some(rendezvous_message::Union::punch_hole_request(ph)) => {
Some(rendezvous_message::Union::PunchHoleRequest(ph)) => {
// there may be several attempts, so sink can be None
if let Some(sink) = sink.take() {
self.tcp_punch.lock().await.insert(addr, sink);
self.tcp_punch.lock().await.insert(try_into_v4(addr), sink);
}
allow_err!(self.handle_tcp_punch_hole_request(addr, ph, &key, ws).await);
allow_err!(self.handle_tcp_punch_hole_request(addr, ph, key, ws).await);
return true;
}
Some(rendezvous_message::Union::request_relay(mut rf)) => {
Some(rendezvous_message::Union::RequestRelay(mut rf)) => {
// there may be several attempts, so sink can be None
if let Some(sink) = sink.take() {
self.tcp_punch.lock().await.insert(addr, sink);
self.tcp_punch.lock().await.insert(try_into_v4(addr), sink);
}
if let Some(peer) = self.pm.get_in_memory(&rf.id).await {
let mut msg_out = RendezvousMessage::new();
rf.socket_addr = AddrMangle::encode(addr);
rf.socket_addr = AddrMangle::encode(addr).into();
msg_out.set_request_relay(rf);
let peer_addr = peer.read().await.socket_addr;
self.tx.send(Data::Msg(msg_out, peer_addr)).ok();
self.tx.send(Data::Msg(msg_out.into(), peer_addr)).ok();
}
return true;
}
Some(rendezvous_message::Union::relay_response(mut rr)) => {
Some(rendezvous_message::Union::RelayResponse(mut rr)) => {
let addr_b = AddrMangle::decode(&rr.socket_addr);
rr.socket_addr = Default::default();
let id = rr.get_id();
let id = rr.id();
if !id.is_empty() {
let pk = self.get_pk(&rr.version, id.to_owned()).await;
rr.set_pk(pk);
}
let mut msg_out = RendezvousMessage::new();
if !rr.relay_server.is_empty() {
if self.is_lan(addr_b) {
// https://github.com/rustdesk/rustdesk-server/issues/24
rr.relay_server = self.inner.local_ip.clone();
} else if rr.relay_server == self.inner.local_ip {
rr.relay_server = self.get_relay_server(addr.ip(), addr_b.ip());
}
}
msg_out.set_relay_response(rr);
allow_err!(self.send_to_tcp_sync(msg_out, addr_b).await);
}
Some(rendezvous_message::Union::punch_hole_sent(phs)) => {
Some(rendezvous_message::Union::PunchHoleSent(phs)) => {
allow_err!(self.handle_hole_sent(phs, addr, None).await);
}
Some(rendezvous_message::Union::local_addr(la)) => {
Some(rendezvous_message::Union::LocalAddr(la)) => {
allow_err!(self.handle_local_addr(la, addr, None).await);
}
Some(rendezvous_message::Union::test_nat_request(tar)) => {
Some(rendezvous_message::Union::TestNatRequest(tar)) => {
let mut msg_out = RendezvousMessage::new();
let mut res = TestNatResponse {
port: addr.port() as _,
..Default::default()
};
if self.serial > tar.serial {
if self.inner.serial > tar.serial {
let mut cu = ConfigUpdate::new();
cu.serial = self.serial;
cu.serial = self.inner.serial;
cu.rendezvous_servers = (*self.rendezvous_servers).clone();
res.cu = MessageField::from_option(Some(cu));
}
msg_out.set_test_nat_response(res);
Self::send_to_sink(sink, msg_out).await;
}
Some(rendezvous_message::Union::register_pk(_rk)) => {
Some(rendezvous_message::Union::RegisterPk(_)) => {
let res = register_pk_response::Result::NOT_SUPPORT;
let mut msg_out = RendezvousMessage::new();
msg_out.set_register_pk_response(RegisterPkResponse {
@@ -530,7 +574,7 @@ impl RendezvousServer {
ip != old.socket_addr.ip()
} else {
ip.to_string() != old.info.ip
} && ip != ADDR_127;
} && !ip.is_loopback();
let request_pk = old.pk.is_empty() || ip_change;
if !request_pk {
old.socket_addr = socket_addr;
@@ -577,7 +621,7 @@ impl RendezvousServer {
);
let mut msg_out = RendezvousMessage::new();
let mut p = PunchHoleResponse {
socket_addr: AddrMangle::encode(addr),
socket_addr: AddrMangle::encode(addr).into(),
pk: self.get_pk(&phs.version, phs.id).await,
relay_server: phs.relay_server.clone(),
..Default::default()
@@ -634,6 +678,7 @@ impl RendezvousServer {
key: &str,
ws: bool,
) -> ResultType<(RendezvousMessage, Option<SocketAddr>)> {
let mut ph = ph;
if !key.is_empty() && ph.licence_key != key {
let mut msg_out = RendezvousMessage::new();
msg_out.set_punch_hole_response(PunchHoleResponse {
@@ -662,29 +707,25 @@ impl RendezvousServer {
return Ok((msg_out, None));
}
let mut msg_out = RendezvousMessage::new();
if unsafe { ALWAYS_USE_RELAY } {
let relay_server = self.get_relay_server(addr.ip(), peer_addr.ip());
if !relay_server.is_empty() {
msg_out.set_request_relay(RequestRelay {
relay_server,
..Default::default()
});
return Ok((msg_out, Some(peer_addr)));
let peer_is_lan = self.is_lan(peer_addr);
let is_lan = self.is_lan(addr);
let mut relay_server = self.get_relay_server(addr.ip(), peer_addr.ip());
if ALWAYS_USE_RELAY.load(Ordering::SeqCst) || (peer_is_lan ^ is_lan) {
if peer_is_lan {
// https://github.com/rustdesk/rustdesk-server/issues/24
relay_server = self.inner.local_ip.clone()
}
ph.nat_type = NatType::SYMMETRIC.into(); // will force relay
}
let same_intranet = !ws
&& match peer_addr {
SocketAddr::V4(a) => match addr {
SocketAddr::V4(b) => a.ip() == b.ip(),
let same_intranet: bool = !ws
&& (peer_is_lan && is_lan || {
match (peer_addr, addr) {
(SocketAddr::V4(a), SocketAddr::V4(b)) => a.ip() == b.ip(),
(SocketAddr::V6(a), SocketAddr::V6(b)) => a.ip() == b.ip(),
_ => false,
},
SocketAddr::V6(a) => match addr {
SocketAddr::V6(b) => a.ip() == b.ip(),
_ => false,
},
};
let socket_addr = AddrMangle::encode(addr);
let relay_server = self.get_relay_server(addr.ip(), peer_addr.ip());
}
});
let socket_addr = AddrMangle::encode(addr).into();
if same_intranet {
log::debug!(
"Fetch local addr {:?} {:?} request from {:?}",
@@ -711,20 +752,49 @@ impl RendezvousServer {
..Default::default()
});
}
return Ok((msg_out, Some(peer_addr)));
Ok((msg_out, Some(peer_addr)))
} else {
let mut msg_out = RendezvousMessage::new();
msg_out.set_punch_hole_response(PunchHoleResponse {
failure: punch_hole_response::Failure::ID_NOT_EXIST.into(),
..Default::default()
});
return Ok((msg_out, None));
Ok((msg_out, None))
}
}
#[inline]
async fn handle_online_request(
&mut self,
stream: &mut FramedStream,
peers: Vec<String>,
) -> ResultType<()> {
let mut states = BytesMut::zeroed((peers.len() + 7) / 8);
for (i, peer_id) in peers.iter().enumerate() {
if let Some(peer) = self.pm.get_in_memory(peer_id).await {
let elapsed = peer.read().await.last_reg_time.elapsed().as_millis() as i32;
// bits are filled from the most significant (leftmost) bit of each byte
let states_idx = i / 8;
let bit_idx = 7 - i % 8;
if elapsed < REG_TIMEOUT {
states[states_idx] |= 0x01 << bit_idx;
}
}
}
let mut msg_out = RendezvousMessage::new();
msg_out.set_online_response(OnlineResponse {
states: states.into(),
..Default::default()
});
stream.send(&msg_out).await?;
Ok(())
}
#[inline]
async fn send_to_tcp(&mut self, msg: RendezvousMessage, addr: SocketAddr) {
let mut tcp = self.tcp_punch.lock().await.remove(&addr);
let mut tcp = self.tcp_punch.lock().await.remove(&try_into_v4(addr));
tokio::spawn(async move {
Self::send_to_sink(&mut tcp, msg).await;
});
@@ -752,7 +822,7 @@ impl RendezvousServer {
msg: RendezvousMessage,
addr: SocketAddr,
) -> ResultType<()> {
let mut sink = self.tcp_punch.lock().await.remove(&addr);
let mut sink = self.tcp_punch.lock().await.remove(&try_into_v4(addr));
Self::send_to_sink(&mut sink, msg).await;
Ok(())
}
@@ -767,7 +837,7 @@ impl RendezvousServer {
) -> ResultType<()> {
let (msg, to_addr) = self.handle_punch_hole_request(addr, ph, key, ws).await?;
if let Some(addr) = to_addr {
self.tx.send(Data::Msg(msg, addr))?;
self.tx.send(Data::Msg(msg.into(), addr))?;
} else {
self.send_to_tcp_sync(msg, addr).await?;
}
@@ -783,7 +853,7 @@ impl RendezvousServer {
) -> ResultType<()> {
let (msg, to_addr) = self.handle_punch_hole_request(addr, ph, key, false).await?;
self.tx.send(Data::Msg(
msg,
msg.into(),
match to_addr {
Some(addr) => addr,
None => addr,
@@ -828,22 +898,21 @@ impl RendezvousServer {
self.relay_servers = self.relay_servers0.clone();
}
fn get_relay_server(&self, pa: IpAddr, pb: IpAddr) -> String {
fn get_relay_server(&self, _pa: IpAddr, _pb: IpAddr) -> String {
if self.relay_servers.is_empty() {
return "".to_owned();
} else if self.relay_servers.len() == 1 {
return self.relay_servers[0].clone();
}
let i = unsafe {
ROTATION_RELAY_SERVER += 1;
ROTATION_RELAY_SERVER % self.relay_servers.len()
};
let i = ROTATION_RELAY_SERVER.fetch_add(1, Ordering::SeqCst) % self.relay_servers.len();
self.relay_servers[i].clone()
}
async fn check_cmd(&self, cmd: &str) -> String {
use std::fmt::Write as _;
let mut res = "".to_owned();
let mut fds = cmd.trim().split(" ");
let mut fds = cmd.trim().split(' ');
match fds.next() {
Some("h") => {
res = format!(
@@ -861,7 +930,7 @@ impl RendezvousServer {
self.tx.send(Data::RelayServers0(rs.to_owned())).ok();
} else {
for ip in self.relay_servers.iter() {
res += &format!("{}\n", ip);
let _ = writeln!(res, "{ip}");
}
}
}
@@ -877,8 +946,9 @@ impl RendezvousServer {
if start < 0 {
if let Some(ip) = ip {
if let Some((a, b)) = lock.get(ip) {
res += &format!(
"{}/{}s {}/{}s\n",
let _ = writeln!(
res,
"{}/{}s {}/{}s",
a.0,
a.1.elapsed().as_secs(),
b.0.len(),
@@ -903,8 +973,9 @@ impl RendezvousServer {
continue;
}
if let Some((ip, (a, b))) = x {
res += &format!(
"{}: {}/{}s {}/{}s\n",
let _ = writeln!(
res,
"{}: {}/{}s {}/{}s",
ip,
a.0,
a.1.elapsed().as_secs(),
@@ -921,10 +992,10 @@ impl RendezvousServer {
res = format!("{}\n", lock.len());
let id = fds.next();
let mut start = id.map(|x| x.parse::<i32>().unwrap_or(-1)).unwrap_or(-1);
if start < 0 || start > 10_000_000 {
if !(0..=10_000_000).contains(&start) {
if let Some(id) = id {
if let Some((tm, ips)) = lock.get(id) {
res += &format!("{}s {:?}\n", tm.elapsed().as_secs(), ips);
let _ = writeln!(res, "{}s {:?}", tm.elapsed().as_secs(), ips);
}
if fds.next() == Some("-") {
lock.remove(id);
@@ -944,7 +1015,7 @@ impl RendezvousServer {
continue;
}
if let Some((id, (tm, ips))) = x {
res += &format!("{}: {}s {:?}\n", id, tm.elapsed().as_secs(), ips,);
let _ = writeln!(res, "{}: {}s {:?}", id, tm.elapsed().as_secs(), ips,);
}
}
}
@@ -952,13 +1023,17 @@ impl RendezvousServer {
Some("always-use-relay" | "aur") => {
if let Some(rs) = fds.next() {
if rs.to_uppercase() == "Y" {
unsafe { ALWAYS_USE_RELAY = true };
ALWAYS_USE_RELAY.store(true, Ordering::SeqCst);
} else {
unsafe { ALWAYS_USE_RELAY = false };
ALWAYS_USE_RELAY.store(false, Ordering::SeqCst);
}
self.tx.send(Data::RelayServers0(rs.to_owned())).ok();
} else {
res += &format!("ALWAYS_USE_RELAY: {:?}\n", unsafe { ALWAYS_USE_RELAY });
let _ = writeln!(
res,
"ALWAYS_USE_RELAY: {:?}",
ALWAYS_USE_RELAY.load(Ordering::SeqCst)
);
}
}
Some("test-geo" | "tg") => {
@@ -980,11 +1055,11 @@ impl RendezvousServer {
}
async fn handle_listener2(&self, stream: TcpStream, addr: SocketAddr) {
if addr.ip().to_string() == "127.0.0.1" {
let rs = self.clone();
let mut rs = self.clone();
if addr.ip().is_loopback() {
tokio::spawn(async move {
let mut stream = stream;
let mut buffer = [0; 64];
let mut buffer = [0; 1024];
if let Ok(Ok(n)) = timeout(1000, stream.read(&mut buffer[..])).await {
if let Ok(data) = std::str::from_utf8(&buffer[..n]) {
let res = rs.check_cmd(data).await;
@@ -999,34 +1074,31 @@ impl RendezvousServer {
let mut stream = stream;
if let Some(Ok(bytes)) = stream.next_timeout(30_000).await {
if let Ok(msg_in) = RendezvousMessage::parse_from_bytes(&bytes) {
if let Some(rendezvous_message::Union::test_nat_request(_)) = msg_in.union {
let mut msg_out = RendezvousMessage::new();
msg_out.set_test_nat_response(TestNatResponse {
port: addr.port() as _,
..Default::default()
});
stream.send(&msg_out).await.ok();
match msg_in.union {
Some(rendezvous_message::Union::TestNatRequest(_)) => {
let mut msg_out = RendezvousMessage::new();
msg_out.set_test_nat_response(TestNatResponse {
port: addr.port() as _,
..Default::default()
});
stream.send(&msg_out).await.ok();
}
Some(rendezvous_message::Union::OnlineRequest(or)) => {
allow_err!(rs.handle_online_request(&mut stream, or.peers).await);
}
_ => {}
}
}
}
});
}
async fn handle_listener(
&self,
stream: TcpStream,
addr: SocketAddr,
key: &str,
ws: bool,
) {
async fn handle_listener(&self, stream: TcpStream, addr: SocketAddr, key: &str, ws: bool) {
log::debug!("Tcp connection from {:?}, ws: {}", addr, ws);
let mut rs = self.clone();
let key = key.to_owned();
tokio::spawn(async move {
allow_err!(
rs.handle_listener_inner(stream, addr, &key, ws)
.await
);
allow_err!(rs.handle_listener_inner(stream, addr, &key, ws).await);
});
}
@@ -1034,51 +1106,58 @@ impl RendezvousServer {
async fn handle_listener_inner(
&mut self,
stream: TcpStream,
addr: SocketAddr,
mut addr: SocketAddr,
key: &str,
ws: bool,
) -> ResultType<()> {
let mut sink;
if ws {
let ws_stream = tokio_tungstenite::accept_async(stream).await?;
use tokio_tungstenite::tungstenite::handshake::server::{Request, Response};
let callback = |req: &Request, response: Response| {
let headers = req.headers();
let real_ip = headers
.get("X-Real-IP")
.or_else(|| headers.get("X-Forwarded-For"))
.and_then(|header_value| header_value.to_str().ok());
if let Some(ip) = real_ip {
if ip.contains('.') {
addr = format!("{ip}:0").parse().unwrap_or(addr);
} else {
addr = format!("[{ip}]:0").parse().unwrap_or(addr);
}
}
Ok(response)
};
let ws_stream = tokio_tungstenite::accept_hdr_async(stream, callback).await?;
let (a, mut b) = ws_stream.split();
sink = Some(Sink::Ws(a));
while let Ok(Some(Ok(msg))) = timeout(30_000, b.next()).await {
match msg {
tungstenite::Message::Binary(bytes) => {
if !self
.handle_tcp(&bytes, &mut sink, addr, key, ws)
.await
{
break;
}
if let tungstenite::Message::Binary(bytes) = msg {
if !self.handle_tcp(&bytes, &mut sink, addr, key, ws).await {
break;
}
_ => {}
}
}
} else {
let (a, mut b) = Framed::new(stream, BytesCodec::new()).split();
sink = Some(Sink::TcpStream(a));
while let Ok(Some(Ok(bytes))) = timeout(30_000, b.next()).await {
if !self
.handle_tcp(&bytes, &mut sink, addr, key, ws)
.await
{
if !self.handle_tcp(&bytes, &mut sink, addr, key, ws).await {
break;
}
}
}
if sink.is_none() {
self.tcp_punch.lock().await.remove(&addr);
self.tcp_punch.lock().await.remove(&try_into_v4(addr));
}
log::debug!("Tcp connection from {:?} closed", addr);
Ok(())
}
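
The websocket branch above now trusts X-Real-IP / X-Forwarded-For from a reverse proxy and rewrites the peer address accordingly. A small sketch of just that rewrite rule, assuming the header holds a single bare IP (the hypothetical fallback stands in for the raw TCP peer address):

use std::net::SocketAddr;

// Bare IPv4 addresses parse as "ip:0"; IPv6 addresses need brackets first.
fn forwarded_addr(header: &str, fallback: SocketAddr) -> SocketAddr {
    let ip = header.trim();
    let candidate = if ip.contains('.') {
        format!("{ip}:0")
    } else {
        format!("[{ip}]:0")
    };
    candidate.parse().unwrap_or(fallback)
}

fn main() {
    let fallback: SocketAddr = "10.0.0.1:21116".parse().unwrap();
    assert_eq!(forwarded_addr("203.0.113.7", fallback).ip().to_string(), "203.0.113.7");
    assert_eq!(forwarded_addr("2001:db8::1", fallback).ip().to_string(), "2001:db8::1");
}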
#[inline]
async fn get_pk(&mut self, version: &str, id: String) -> Vec<u8> {
if version.is_empty() || self.sk.is_none() {
Vec::new()
async fn get_pk(&mut self, version: &str, id: String) -> Bytes {
if version.is_empty() || self.inner.sk.is_none() {
Bytes::new()
} else {
match self.pm.get(&id).await {
Some(peer) => {
@@ -1091,16 +1170,18 @@ impl RendezvousServer {
}
.write_to_bytes()
.unwrap_or_default(),
&self.sk.as_ref().unwrap(),
self.inner.sk.as_ref().unwrap(),
)
.into()
}
_ => Vec::new(),
_ => Bytes::new(),
}
}
}
#[inline]
fn get_server_sk(&mut self, key: &str) -> String {
fn get_server_sk(key: &str) -> (String, Option<sign::SecretKey>) {
let mut out_sk = None;
let mut key = key.to_owned();
if let Ok(sk) = base64::decode(&key) {
if sk.len() == sign::SECRETKEYBYTES {
@@ -1108,25 +1189,40 @@ impl RendezvousServer {
key = base64::encode(&sk[(sign::SECRETKEYBYTES / 2)..]);
let mut tmp = [0u8; sign::SECRETKEYBYTES];
tmp[..].copy_from_slice(&sk);
self.sk = Some(sign::SecretKey(tmp));
out_sk = Some(sign::SecretKey(tmp));
}
}
if key.is_empty() || key == "-" || key == "_" {
let (pk, sk) = crate::common::gen_sk();
self.sk = sk;
let (pk, sk) = crate::common::gen_sk(0);
out_sk = sk;
if !key.is_empty() {
key = pk;
} else {
std::env::set_var("KEY_FOR_API", pk);
}
}
if !key.is_empty() {
log::info!("Key: {}", key);
std::env::set_var("KEY_FOR_API", key.clone());
}
key
(key, out_sk)
}
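
get_server_sk works because a libsodium ed25519 secret key is 64 bytes whose second half is the public key, so the public half can be recovered for logging and KEY_FOR_API. A quick sketch of that layout, using the same sodiumoxide and base64 crates as the surrounding code:

use sodiumoxide::crypto::sign;

fn main() {
    let (pk, sk) = sign::gen_keypair();
    let sk_bytes: &[u8] = sk.as_ref();
    assert_eq!(sk_bytes.len(), sign::SECRETKEYBYTES); // 64 bytes
    // The trailing 32 bytes of the secret key are exactly the public key.
    assert_eq!(&sk_bytes[sign::SECRETKEYBYTES / 2..], pk.as_ref());
    println!("public key: {}", base64::encode(&sk_bytes[sign::SECRETKEYBYTES / 2..]));
}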
#[inline]
fn is_lan(&self, addr: SocketAddr) -> bool {
if let Some(network) = &self.inner.mask {
match addr {
SocketAddr::V4(v4_socket_addr) => {
return network.contains(*v4_socket_addr.ip());
}
SocketAddr::V6(v6_socket_addr) => {
if let Some(v4_addr) = v6_socket_addr.ip().to_ipv4() {
return network.contains(v4_addr);
}
}
}
}
false
}
}
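
is_lan above checks the client address against an optional IPv4 mask, mapping IPv4-in-IPv6 peers back to IPv4 first. A std-only sketch of the same idea with a hypothetical in_subnet helper taking a (network, prefix-length) pair instead of whatever network type self.inner.mask holds:

use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// Treat IPv4-mapped IPv6 peers as IPv4 and test membership in net/prefix.
// prefix is assumed to be <= 32.
fn in_subnet(addr: SocketAddr, net: Ipv4Addr, prefix: u32) -> bool {
    let v4 = match addr.ip() {
        IpAddr::V4(v4) => Some(v4),
        IpAddr::V6(v6) => v6.to_ipv4(), // Some(_) for ::ffff:a.b.c.d
    };
    match v4 {
        Some(ip) => {
            let mask = u32::MAX.checked_shl(32 - prefix).unwrap_or(0);
            (u32::from(ip) & mask) == (u32::from(net) & mask)
        }
        None => false,
    }
}

fn main() {
    let addr: SocketAddr = "[::ffff:192.168.1.50]:21116".parse().unwrap();
    assert!(in_subnet(addr, Ipv4Addr::new(192, 168, 1, 0), 24));
    assert!(!in_subnet(addr, Ipv4Addr::new(10, 0, 0, 0), 8));
}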
@@ -1135,13 +1231,13 @@ async fn check_relay_servers(rs0: Arc<RelayServers>, tx: Sender) {
let rs = Arc::new(Mutex::new(Vec::new()));
for x in rs0.iter() {
let mut host = x.to_owned();
if !host.contains(":") {
host = format!("{}:{}", host, hbb_common::config::RELAY_PORT);
if !host.contains(':') {
host = format!("{}:{}", host, config::RELAY_PORT);
}
let rs = rs.clone();
let x = x.clone();
futs.push(tokio::spawn(async move {
if FramedStream::new(&host, "0.0.0.0:0", CHECK_RELAY_TIMEOUT)
if FramedStream::new(&host, None, CHECK_RELAY_TIMEOUT)
.await
.is_ok()
{
@@ -1151,7 +1247,7 @@ async fn check_relay_servers(rs0: Arc<RelayServers>, tx: Sender) {
}
join_all(futs).await;
log::debug!("check_relay_servers");
let rs = std::mem::replace(&mut *rs.lock().await, Default::default());
let rs = std::mem::take(&mut *rs.lock().await);
if !rs.is_empty() {
tx.send(Data::RelayServers(rs)).ok();
}
@@ -1159,7 +1255,16 @@ async fn check_relay_servers(rs0: Arc<RelayServers>, tx: Sender) {
// temp solution to solve udp socket failure
async fn test_hbbs(addr: SocketAddr) -> ResultType<()> {
let mut socket = FramedSocket::new("0.0.0.0:0").await?;
let mut addr = addr;
if addr.ip().is_unspecified() {
addr.set_ip(if addr.is_ipv4() {
IpAddr::V4(Ipv4Addr::LOCALHOST)
} else {
IpAddr::V6(Ipv6Addr::LOCALHOST)
});
}
let mut socket = FramedSocket::new(config::Config::get_any_listen_addr(addr.is_ipv4())).await?;
let mut msg_out = RendezvousMessage::new();
msg_out.set_register_peer(RegisterPeer {
id: "(:test_hbbs:)".to_owned(),
@@ -1172,8 +1277,7 @@ async fn test_hbbs(addr: SocketAddr) -> ResultType<()> {
tokio::select! {
_ = timer.tick() => {
if last_time_recv.elapsed().as_secs() > 12 {
log::error!("Timeout of test_hbbs");
std::process::exit(1);
bail!("Timeout of test_hbbs");
}
socket.send(&msg_out, addr).await?;
}
@@ -1187,13 +1291,6 @@ async fn test_hbbs(addr: SocketAddr) -> ResultType<()> {
}
}
#[inline]
fn distance(a: &(i32, i32), b: &(i32, i32)) -> i32 {
let dx = a.0 - b.0;
let dy = a.1 - b.1;
dx * dx + dy * dy
}
#[inline]
async fn send_rk_res(
socket: &mut FramedSocket,
@@ -1207,3 +1304,22 @@ async fn send_rk_res(
});
socket.send(&msg_out, addr).await
}
async fn create_udp_listener(port: i32, rmem: usize) -> ResultType<FramedSocket> {
let addr = SocketAddr::new(IpAddr::V6(Ipv6Addr::UNSPECIFIED), port as _);
if let Ok(s) = FramedSocket::new_reuse(&addr, true, rmem).await {
log::debug!("listen on udp {:?}", s.local_addr());
return Ok(s);
}
let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), port as _);
let s = FramedSocket::new_reuse(&addr, true, rmem).await?;
log::debug!("listen on udp {:?}", s.local_addr());
Ok(s)
}
#[inline]
async fn create_tcp_listener(port: i32) -> ResultType<TcpListener> {
let s = listen_any(port as _).await?;
log::debug!("listen on tcp {:?}", s.local_addr());
Ok(s)
}

src/utils.rs Normal file

@@ -0,0 +1,170 @@
use dns_lookup::{lookup_addr, lookup_host};
use hbb_common::{bail, ResultType};
use sodiumoxide::crypto::sign;
use std::{
env,
net::{IpAddr, TcpStream},
process, str,
};
fn print_help() {
println!(
"Usage:
rustdesk-util [command]\n
Available Commands:
genkeypair Generate a new keypair
validatekeypair [public key] [secret key] Validate an existing keypair
doctor [rustdesk-server] Check for server connection problems"
);
process::exit(0x0001);
}
fn error_then_help(msg: &str) {
println!("ERROR: {msg}\n");
print_help();
}
fn gen_keypair() {
let (pk, sk) = sign::gen_keypair();
let public_key = base64::encode(pk);
let secret_key = base64::encode(sk);
println!("Public Key: {public_key}");
println!("Secret Key: {secret_key}");
}
fn validate_keypair(pk: &str, sk: &str) -> ResultType<()> {
let sk1 = base64::decode(sk);
if sk1.is_err() {
bail!("Invalid secret key");
}
let sk1 = sk1.unwrap();
let secret_key = sign::SecretKey::from_slice(sk1.as_slice());
if secret_key.is_none() {
bail!("Invalid Secret key");
}
let secret_key = secret_key.unwrap();
let pk1 = base64::decode(pk);
if pk1.is_err() {
bail!("Invalid public key");
}
let pk1 = pk1.unwrap();
let public_key = sign::PublicKey::from_slice(pk1.as_slice());
if public_key.is_none() {
bail!("Invalid Public key");
}
let public_key = public_key.unwrap();
let random_data_to_test = b"This is meh.";
let signed_data = sign::sign(random_data_to_test, &secret_key);
let verified_data = sign::verify(&signed_data, &public_key);
if verified_data.is_err() {
bail!("Key pair is INVALID");
}
let verified_data = verified_data.unwrap();
if random_data_to_test != &verified_data[..] {
bail!("Key pair is INVALID");
}
Ok(())
}
fn doctor_tcp(address: std::net::IpAddr, port: &str, desc: &str) {
let start = std::time::Instant::now();
let conn = format!("{address}:{port}");
if let Ok(_stream) = TcpStream::connect(conn.as_str()) {
let elapsed = std::time::Instant::now().duration_since(start);
println!(
"TCP Port {} ({}): OK in {} ms",
port,
desc,
elapsed.as_millis()
);
} else {
println!("TCP Port {port} ({desc}): ERROR");
}
}
fn doctor_ip(server_ip_address: std::net::IpAddr, server_address: Option<&str>) {
println!("\nChecking IP address: {server_ip_address}");
println!("Is IPV4: {}", server_ip_address.is_ipv4());
println!("Is IPV6: {}", server_ip_address.is_ipv6());
// reverse dns lookup
// TODO: (check) doesn't seem to do reverse lookup on OSX...
let reverse = lookup_addr(&server_ip_address).unwrap();
if let Some(server_address) = server_address {
if reverse == server_address {
println!("Reverse DNS lookup: '{reverse}' MATCHES server address");
} else {
println!(
"Reverse DNS lookup: '{reverse}' DOESN'T MATCH server address '{server_address}'"
);
}
}
// TODO: ICMP ping?
// port check TCP (UDP is hard to check)
doctor_tcp(server_ip_address, "21114", "API");
doctor_tcp(server_ip_address, "21115", "hbbs extra port for nat test");
doctor_tcp(server_ip_address, "21116", "hbbs");
doctor_tcp(server_ip_address, "21117", "hbbr tcp");
doctor_tcp(server_ip_address, "21118", "hbbs websocket");
doctor_tcp(server_ip_address, "21119", "hbbr websocket");
// TODO: key check
}
fn doctor(server_address_unclean: &str) {
let server_address3 = server_address_unclean.trim();
let server_address2 = server_address3.to_lowercase();
let server_address = server_address2.as_str();
println!("Checking server: {server_address}\n");
if let Ok(server_ipaddr) = server_address.parse::<IpAddr>() {
// user requested an ip address
doctor_ip(server_ipaddr, None);
} else {
// the passed string is not an ip address
let ips: Vec<std::net::IpAddr> = lookup_host(server_address).unwrap();
println!("Found {} IP addresses: ", ips.len());
ips.iter().for_each(|ip| println!(" - {ip}"));
ips.iter()
.for_each(|ip| doctor_ip(*ip, Some(server_address)));
}
}
fn main() {
let args: Vec<_> = env::args().collect();
if args.len() <= 1 {
print_help();
}
let command = args[1].to_lowercase();
match command.as_str() {
"genkeypair" => gen_keypair(),
"validatekeypair" => {
if args.len() <= 3 {
error_then_help("You must supply both the public and the secret key");
}
let res = validate_keypair(args[2].as_str(), args[3].as_str());
if let Err(e) = res {
println!("{e}");
process::exit(0x0001);
}
println!("Key pair is VALID");
}
"doctor" => {
if args.len() <= 2 {
error_then_help("You must supply the rustdesk-server address");
}
doctor(args[2].as_str());
}
_ => print_help(),
}
}
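
As a usage sketch drawn from the help text above: `rustdesk-util genkeypair` prints a fresh public/secret key pair, `rustdesk-util validatekeypair <public key> <secret key>` signs and verifies a test message to confirm the pair matches, and `rustdesk-util doctor <rustdesk-server>` resolves the address, attempts a reverse DNS lookup, and probes TCP ports 21114-21119 (API, NAT test, hbbs, hbbr tcp, and the two websocket ports).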


@@ -1 +0,0 @@
pub const VERSION: &str = "1.1.5";


@@ -0,0 +1,20 @@
[Unit]
Description=Rustdesk Relay Server
[Service]
Type=simple
LimitNOFILE=1000000
ExecStart=/usr/bin/hbbr
WorkingDirectory=/var/lib/rustdesk-server/
User=
Group=
Restart=always
StandardOutput=append:/var/log/rustdesk-server/hbbr.log
StandardError=append:/var/log/rustdesk-server/hbbr.error
# Restart service after 10 seconds if the service crashes
RestartSec=10
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,20 @@
[Unit]
Description=Rustdesk Signal Server
[Service]
Type=simple
LimitNOFILE=1000000
ExecStart=/usr/bin/hbbs
WorkingDirectory=/var/lib/rustdesk-server/
User=
Group=
Restart=always
StandardOutput=append:/var/log/rustdesk-server/hbbs.log
StandardError=append:/var/log/rustdesk-server/hbbs.error
# Restart service after 10 seconds if the service crashes
RestartSec=10
[Install]
WantedBy=multi-user.target

ui/.cargo/config.toml Normal file

@@ -0,0 +1,8 @@
[target.x86_64-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"]
[target.i686-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"]
[target.'cfg(target_os="macos")']
rustflags = [
"-C", "link-args=-sectcreate __CGPreLoginApp __cgpreloginapp /dev/null",
]

ui/.gitignore vendored Normal file

@@ -0,0 +1,4 @@
# Generated by Cargo
# will have compiled files and executables
/target/

ui/Cargo.lock generated Normal file

File diff suppressed because it is too large.

ui/Cargo.toml Normal file

@@ -0,0 +1,31 @@
[package]
name = "rustdesk_server"
version = "0.1.2"
description = "rustdesk server gui"
authors = ["elilchen"]
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[build-dependencies]
tauri-build = { version = "1.2", features = [] }
winres = "0.1"
[dependencies]
async-std = { version = "1.12", features = ["attributes", "unstable"] }
crossbeam-channel = "0.5"
derive-new = "0.5"
notify = "5.1"
once_cell = "1.17"
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
tauri = { version = "1.2", features = ["fs-exists", "fs-read-dir", "fs-read-file", "fs-write-file", "path-all", "shell-open", "system-tray"] }
windows-service = "0.5.0"
[features]
# by default Tauri runs in production mode
# when `tauri dev` runs it is executed with `cargo run --no-default-features` if `devPath` is a URL
default = ["custom-protocol"]
# this feature is used for production builds where `devPath` points to the filesystem
# DO NOT remove this
custom-protocol = ["tauri/custom-protocol"]

ui/build.rs Normal file

@@ -0,0 +1,21 @@
fn main() {
tauri_build::build();
if cfg!(target_os = "windows") {
let mut res = winres::WindowsResource::new();
res.set_icon("icons\\icon.ico");
res.set_manifest(
r#"
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
<security>
<requestedPrivileges>
<requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
</requestedPrivileges>
</security>
</trustInfo>
</assembly>
"#,
);
res.compile().unwrap();
}
}

ui/html/.gitignore vendored Normal file

@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

ui/html/index.html Normal file

@@ -0,0 +1,18 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>RustDesk Server</title>
<link rel="icon" href="data:;base64,=">
<script>addEventListener('contextmenu', e => e.preventDefault());</script>
<script type="module" src="/main.js" defer></script>
</head>
<body style="visibility: hidden">
<textarea></textarea>
<form>
<label><input type="checkbox"> <p>Turn on auto scroll</p></label>
<label><p>Press ctrl + s to save</p></label>
</form>
</body>
</html>

ui/html/main.js Normal file

@@ -0,0 +1,159 @@
import 'codemirror/lib/codemirror.css';
import './style.css';
import 'codemirror/mode/toml/toml.js';
import CodeMirror from 'codemirror';
const { event, fs, path, tauri } = window.__TAURI__;
class View {
constructor() {
Object.assign(this, {
content: '',
action_time: 0,
is_auto_scroll: true,
is_edit_mode: false,
is_file_changed: false,
is_form_changed: false,
is_content_changed: false
}, ...arguments);
addEventListener('DOMContentLoaded', this.init.bind(this));
}
async init() {
this.editor = this.renderEditor();
this.editor.on('scroll', this.editorScroll.bind(this));
this.editor.on('keypress', this.editorSave.bind(this));
this.form = this.renderForm();
this.form.addEventListener('change', this.formChange.bind(this));
event.listen('__update__', this.appAction.bind(this));
event.emit('__action__', '__init__');
while (true) {
let now = Date.now();
try {
await this.update();
this.render();
} catch (e) {
console.error(e);
}
await new Promise(r => setTimeout(r, Math.max(0, 33 - (Date.now() - now))));
}
}
async update() {
if (this.is_file_changed) {
this.is_file_changed = false;
let now = Date.now(),
file = await path.resolveResource(this.file);
if (await fs.exists(file)) {
let content = await fs.readTextFile(file);
if (this.action_time < now) {
this.content = content;
this.is_content_changed = true;
}
} else {
if (now >= this.action_time) {
if (this.is_edit_mode) {
this.content = `# https://github.com/rustdesk/rustdesk-server#env-variables
RUST_LOG=info
`;
}
this.is_content_changed = true;
}
console.warn(`${this.file} file is missing`);
}
}
}
async editorSave(editor, e) {
if (e.ctrlKey && e.keyCode === 19 && this.is_edit_mode && !this.locked) {
this.locked = true;
try {
let now = Date.now(),
content = this.editor.doc.getValue(),
file = await path.resolveResource(this.file);
await fs.writeTextFile(file, content);
event.emit('__action__', 'restart');
} catch (e) {
console.error(e);
} finally {
this.locked = false;
}
}
}
editorScroll(e) {
let info = this.editor.getScrollInfo(),
distance = info.height - info.top - info.clientHeight,
is_end = distance < 1;
if (this.is_auto_scroll !== is_end) {
this.is_auto_scroll = is_end;
this.is_form_changed = true;
}
}
formChange(e) {
switch (e.target.tagName.toLowerCase()) {
case 'input':
this.is_auto_scroll = e.target.checked;
break;
}
}
appAction(e) {
let [action, data] = e.payload;
switch (action) {
case 'file':
if (data === '.env') {
this.is_edit_mode = true;
this.file = `bin/${data}`;
} else {
this.is_edit_mode = false;
this.file = `logs/${data}`;
}
this.action_time = Date.now();
this.is_file_changed = true;
this.is_form_changed = true;
break;
}
}
render() {
if (this.is_form_changed) {
this.is_form_changed = false;
this.renderForm();
}
if (this.is_content_changed) {
this.is_content_changed = false;
this.renderEditor();
}
if (this.is_auto_scroll && !this.is_edit_mode) {
this.renderScrollbar();
}
}
renderForm() {
let form = this.form || document.querySelector('form'),
label = form.querySelectorAll('label'),
input = form.querySelector('input');
input.checked = this.is_auto_scroll;
if (this.is_edit_mode) {
label[0].style.display = 'none';
label[1].style.display = 'block';
} else {
label[0].style.display = 'block';
label[1].style.display = 'none';
}
return form;
}
renderEditor() {
let editor = this.editor || CodeMirror.fromTextArea(document.querySelector('textarea'), {
mode: { name: 'toml' },
lineNumbers: true,
autofocus: true
});
editor.setOption('readOnly', !this.is_edit_mode);
editor.doc.setValue(this.content);
editor.doc.clearHistory();
this.content = '';
editor.focus();
return editor;
}
renderScrollbar() {
let info = this.editor.getScrollInfo();
this.editor.scrollTo(info.left, info.height);
}
}
new View();

ui/html/package.json Normal file

@@ -0,0 +1,17 @@
{
"name": "rustdesk_server",
"private": true,
"version": "0.1.2",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview"
},
"devDependencies": {
"vite": "^4.1.0"
},
"dependencies": {
"codemirror": "v5"
}
}

ui/html/style.css Normal file

@@ -0,0 +1,35 @@
body {
visibility: visible !important;
margin: 0;
background: #fff;
}
.CodeMirror {
height: calc(100vh - 20px);
}
form {
height: 20px;
position: fixed;
right: 0;
bottom: 0;
left: 5px;
font-size: 13px;
background: #fff;
}
form>label {
display: none;
vertical-align: middle;
}
form>label>input,
form>label>p {
height: 19px;
padding: 0;
display: inline-block;
margin: 0;
vertical-align: middle;
cursor: pointer;
user-select: none;
}

ui/html/vite.config.js Normal file

@@ -0,0 +1,8 @@
import { defineConfig } from 'vite';
export default defineConfig({
server: {
port: '5177',
strictPort: true
}
});

Binary files added under ui/icons/ (contents not shown): 128x128.png (7.7 KiB), 128x128@2x.png (17 KiB), 32x32.png (1.7 KiB), StoreLogo.png (2.9 KiB), icon.icns, icon.ico (31 KiB), icon.png (40 KiB), plus several further icon sizes whose file names were not captured in this view.

ui/setup.nsi Normal file

@@ -0,0 +1,178 @@
Unicode true
####################################################################
# Includes
!include nsDialogs.nsh
!include MUI2.nsh
!include x64.nsh
!include LogicLib.nsh
####################################################################
# File Info
!define APP_NAME "RustDeskServer"
!define PRODUCT_NAME "rustdesk_server"
!define PRODUCT_DESCRIPTION "Installer for ${PRODUCT_NAME}"
!define COPYRIGHT "Copyright © 2021"
!define VERSION "1.1.13"
VIProductVersion "${VERSION}.0"
VIAddVersionKey "ProductName" "${PRODUCT_NAME}"
VIAddVersionKey "ProductVersion" "${VERSION}"
VIAddVersionKey "FileDescription" "${PRODUCT_DESCRIPTION}"
VIAddVersionKey "LegalCopyright" "${COPYRIGHT}"
VIAddVersionKey "FileVersion" "${VERSION}"
####################################################################
# Installer Attributes
Name "${APP_NAME}"
Outfile "${APP_NAME}.Setup.exe"
Caption "Setup - ${APP_NAME}"
BrandingText "${APP_NAME}"
ShowInstDetails show
RequestExecutionLevel admin
SetOverwrite on
InstallDir "$PROGRAMFILES64\${APP_NAME}"
####################################################################
# Pages
!define MUI_ICON "icons\icon.ico"
!define MUI_ABORTWARNING
!define MUI_LANGDLL_ALLLANGUAGES
!define MUI_FINISHPAGE_SHOWREADME ""
!define MUI_FINISHPAGE_SHOWREADME_TEXT "Create Startup Shortcut"
!define MUI_FINISHPAGE_SHOWREADME_FUNCTION CreateStartupShortcut
!define MUI_FINISHPAGE_RUN "$INSTDIR\${PRODUCT_NAME}.exe"
!insertmacro MUI_PAGE_DIRECTORY
!insertmacro MUI_PAGE_INSTFILES
!insertmacro MUI_PAGE_FINISH
####################################################################
# Language
!insertmacro MUI_LANGUAGE "English" ; The first language is the default language
!insertmacro MUI_LANGUAGE "French"
!insertmacro MUI_LANGUAGE "German"
!insertmacro MUI_LANGUAGE "Spanish"
!insertmacro MUI_LANGUAGE "SpanishInternational"
!insertmacro MUI_LANGUAGE "SimpChinese"
!insertmacro MUI_LANGUAGE "TradChinese"
!insertmacro MUI_LANGUAGE "Japanese"
!insertmacro MUI_LANGUAGE "Korean"
!insertmacro MUI_LANGUAGE "Italian"
!insertmacro MUI_LANGUAGE "Dutch"
!insertmacro MUI_LANGUAGE "Danish"
!insertmacro MUI_LANGUAGE "Swedish"
!insertmacro MUI_LANGUAGE "Norwegian"
!insertmacro MUI_LANGUAGE "NorwegianNynorsk"
!insertmacro MUI_LANGUAGE "Finnish"
!insertmacro MUI_LANGUAGE "Greek"
!insertmacro MUI_LANGUAGE "Russian"
!insertmacro MUI_LANGUAGE "Portuguese"
!insertmacro MUI_LANGUAGE "PortugueseBR"
!insertmacro MUI_LANGUAGE "Polish"
!insertmacro MUI_LANGUAGE "Ukrainian"
!insertmacro MUI_LANGUAGE "Czech"
!insertmacro MUI_LANGUAGE "Slovak"
!insertmacro MUI_LANGUAGE "Croatian"
!insertmacro MUI_LANGUAGE "Bulgarian"
!insertmacro MUI_LANGUAGE "Hungarian"
!insertmacro MUI_LANGUAGE "Thai"
!insertmacro MUI_LANGUAGE "Romanian"
!insertmacro MUI_LANGUAGE "Latvian"
!insertmacro MUI_LANGUAGE "Macedonian"
!insertmacro MUI_LANGUAGE "Estonian"
!insertmacro MUI_LANGUAGE "Turkish"
!insertmacro MUI_LANGUAGE "Lithuanian"
!insertmacro MUI_LANGUAGE "Slovenian"
!insertmacro MUI_LANGUAGE "Serbian"
!insertmacro MUI_LANGUAGE "SerbianLatin"
!insertmacro MUI_LANGUAGE "Arabic"
!insertmacro MUI_LANGUAGE "Farsi"
!insertmacro MUI_LANGUAGE "Hebrew"
!insertmacro MUI_LANGUAGE "Indonesian"
!insertmacro MUI_LANGUAGE "Mongolian"
!insertmacro MUI_LANGUAGE "Luxembourgish"
!insertmacro MUI_LANGUAGE "Albanian"
!insertmacro MUI_LANGUAGE "Breton"
!insertmacro MUI_LANGUAGE "Belarusian"
!insertmacro MUI_LANGUAGE "Icelandic"
!insertmacro MUI_LANGUAGE "Malay"
!insertmacro MUI_LANGUAGE "Bosnian"
!insertmacro MUI_LANGUAGE "Kurdish"
!insertmacro MUI_LANGUAGE "Irish"
!insertmacro MUI_LANGUAGE "Uzbek"
!insertmacro MUI_LANGUAGE "Galician"
!insertmacro MUI_LANGUAGE "Afrikaans"
!insertmacro MUI_LANGUAGE "Catalan"
!insertmacro MUI_LANGUAGE "Esperanto"
!insertmacro MUI_LANGUAGE "Asturian"
!insertmacro MUI_LANGUAGE "Basque"
!insertmacro MUI_LANGUAGE "Pashto"
!insertmacro MUI_LANGUAGE "ScotsGaelic"
!insertmacro MUI_LANGUAGE "Georgian"
!insertmacro MUI_LANGUAGE "Vietnamese"
!insertmacro MUI_LANGUAGE "Welsh"
!insertmacro MUI_LANGUAGE "Armenian"
!insertmacro MUI_LANGUAGE "Corsican"
!insertmacro MUI_LANGUAGE "Tatar"
!insertmacro MUI_LANGUAGE "Hindi"
####################################################################
# Sections
Section "Install"
SetShellVarContext all
nsExec::Exec 'sc stop hbbr'
nsExec::Exec 'sc stop hbbs'
nsExec::Exec 'taskkill /F /IM ${PRODUCT_NAME}.exe'
Sleep 500
SetOutPath $INSTDIR
File /r "setup\*.*"
WriteUninstaller $INSTDIR\uninstall.exe
CreateDirectory "$SMPROGRAMS\${APP_NAME}"
CreateShortCut "$SMPROGRAMS\${APP_NAME}\${APP_NAME}.lnk" "$INSTDIR\${PRODUCT_NAME}.exe"
CreateShortCut "$SMPROGRAMS\${APP_NAME}\Uninstall.lnk" "$INSTDIR\uninstall.exe"
CreateShortCut "$DESKTOP\${APP_NAME}.lnk" "$INSTDIR\${PRODUCT_NAME}.exe"
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=in action=allow program="$INSTDIR\bin\hbbs.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=out action=allow program="$INSTDIR\bin\hbbs.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=in action=allow program="$INSTDIR\bin\hbbr.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=out action=allow program="$INSTDIR\bin\hbbr.exe" enable=yes'
ExecWait 'powershell.exe -NoProfile -windowstyle hidden try { [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12 } catch {}; Invoke-WebRequest -Uri "https://go.microsoft.com/fwlink/p/?LinkId=2124703" -OutFile "$$env:TEMP\MicrosoftEdgeWebview2Setup.exe" ; Start-Process -FilePath "$$env:TEMP\MicrosoftEdgeWebview2Setup.exe" -ArgumentList ($\'/silent$\', $\'/install$\') -Wait'
SectionEnd
Section "Uninstall"
SetShellVarContext all
nsExec::Exec 'sc stop hbbr'
nsExec::Exec 'sc stop hbbs'
nsExec::Exec 'taskkill /F /IM ${PRODUCT_NAME}.exe'
Sleep 500
RMDir /r "$SMPROGRAMS\${APP_NAME}"
Delete "$SMSTARTUP\${APP_NAME}.lnk"
Delete "$DESKTOP\${APP_NAME}.lnk"
nsExec::Exec 'sc delete hbbr'
nsExec::Exec 'sc delete hbbs'
nsExec::Exec 'netsh advfirewall firewall delete rule name="${APP_NAME}"'
RMDir /r "$INSTDIR\bin"
RMDir /r "$INSTDIR\logs"
RMDir /r "$INSTDIR\service"
Delete "$INSTDIR\${PRODUCT_NAME}.exe"
Delete "$INSTDIR\uninstall.exe"
SectionEnd
####################################################################
# Functions
Function CreateStartupShortcut
CreateShortCut "$SMSTARTUP\${APP_NAME}.lnk" "$INSTDIR\${PRODUCT_NAME}.exe"
FunctionEnd

Some files were not shown because too many files have changed in this diff.