
NVIDIA-Omniverse/PhysX

NVIDIA PhysX SDK


NVIDIA PhysX

Copyright (c) 2008-2024 NVIDIA Corporation. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the name of NVIDIA CORPORATION nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Introduction

Welcome to the NVIDIA PhysX source code repository.

This repository contains source releases of the PhysX, Flow, and Blast SDKs used in NVIDIA Omniverse.

Documentation

The user guide and API documentation are available on GitHub Pages. Please create an Issue if you find a documentation issue.

Instructions

Please see instructions specific to each of the libraries in the respective subfolder.

Community-Maintained Build Configuration Fork

Please see the O3DE Fork for community-maintained additional build configurations.

Support

  • Please use GitHub Discussions for questions and comments.
  • GitHub Issues should only be used for bug reports or documentation issues.
  • You can also ask questions in the NVIDIA Omniverse #physics Discord Channel.

redpanda-data/redpanda

Redpanda is a streaming data platform for developers. Kafka API compatible. 10x faster. No ZooKeeper. No JVM!


Redpanda


Redpanda is a streaming data platform for developers. Kafka® API-compatible. ZooKeeper® free. JVM free. We built it from the ground up to eliminate complexity common to Apache Kafka, improve performance by up to 10x, and make the storage architecture safer and more resilient. The simpler devex lets you focus on your code (instead of fighting Kafka) and develop new use cases that were never before possible. The business benefits from a significantly lower total cost and faster time to market. A new platform that scales with you from the smallest projects to petabytes of data distributed across the globe!

Community

Slack is the main way the community interacts with one another in real-time :)

GitHub Discussions is preferred for longer, async, thoughtful discussions

GitHub Issues is reserved only for actual issues. Please use the mailing list for discussions.

Code of conduct for the community

Contributing docs

Getting Started

Prebuilt Packages

We recommend using our free & prebuilt stable releases below.

On macOS

Simply download our rpk binary here. Docker is required on macOS.

brew install redpanda-data/tap/redpanda && rpk container start

On Debian/Ubuntu

curl -1sLf \
  'https://dl.redpanda.com/nzc4ZYQK3WRGd9sy/redpanda/cfg/setup/bash.deb.sh' \
  | sudo -E bash

sudo apt-get install redpanda

On Fedora/RedHat/Amazon Linux

curl -1sLf \
  'https://dl.redpanda.com/nzc4ZYQK3WRGd9sy/redpanda/cfg/setup/bash.rpm.sh' \
  | sudo -E bash

sudo yum install redpanda

On Other Linux

To install from a .tar.gz archive, download the file and extract it into /opt/redpanda.

For amd64:

curl -LO \
  https://dl.redpanda.com/nzc4ZYQK3WRGd9sy/redpanda/raw/names/redpanda-amd64/versions/23.3.6/redpanda-23.3.6-amd64.tar.gz

For arm64:

curl -LO \
  https://dl.redpanda.com/nzc4ZYQK3WRGd9sy/redpanda/raw/names/redpanda-arm64/versions/23.3.6/redpanda-23.3.6-arm64.tar.gz

Replace 23.3.6 with the appropriate version you are trying to download.

GitHub Actions

    - name: start redpanda
      uses: redpanda-data/[email protected]
      with:
        version: "latest"

Now you should be able to connect to Redpanda (Kafka API) running at localhost:9092.
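
Any Kafka client can be used for a quick smoke test against that address. The sketch below is a hedged example assuming librdkafka is available (it is not something this action provides); the topic name test is hypothetical:

#include <librdkafka/rdkafka.h>
#include <cstdio>
#include <cstring>

int main() {
    char errstr[512];

    // Point the client at the broker started by the action.
    rd_kafka_conf_t* conf = rd_kafka_conf_new();
    rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092", errstr, sizeof(errstr));

    // Create a producer; the conf object is owned by the client from here on.
    rd_kafka_t* rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
    if (!rk) {
        std::fprintf(stderr, "failed to create producer: %s\n", errstr);
        return 1;
    }

    // Produce a single record to the (hypothetical) "test" topic.
    const char* msg = "hello redpanda";
    rd_kafka_producev(rk,
                      RD_KAFKA_V_TOPIC("test"),
                      RD_KAFKA_V_VALUE((void*)msg, std::strlen(msg)),
                      RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
                      RD_KAFKA_V_END);

    rd_kafka_flush(rk, 10 * 1000);  // wait up to 10 seconds for delivery
    rd_kafka_destroy(rk);
    return 0;
}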

Build Manually

We provide a very simple build system that uses your system libraries. We recommend users leverage our pre-built stable releases which are vetted, tested, and reproducible with exact versions of the entire transitive dependency graph, including exact compilers all built from source. The only thing we do not build yet is the Linux Kernel, but soon!

Currently clang 16 is required. We test the open-source build nightly using Fedora 38.

sudo ./install-dependencies.sh
cmake --preset release
cmake --build --preset release

For quicker dev setup, we provide a docker image with the toolchain installed.

Release candidate builds

We create a release candidate (RC) build when we get close to a new release and publish these to make new features available for testing. RC builds are not recommended for production use.

RC releases on Debian/Ubuntu

curl -1sLf \
  'https://dl.redpanda.com/E4xN1tVe3Xy60GTx/redpanda-unstable/setup.deb.sh' \
  | sudo -E bash

sudo apt-get install redpanda

RC releases on Fedora/RedHat/Amazon Linux

curl -1sLf \
  'https://dl.redpanda.com/E4xN1tVe3Xy60GTx/redpanda-unstable/setup.rpm.sh' \
  | sudo -E bash

sudo yum install redpanda

RC releases on Docker

This is an example with the v23.1.1-rc1 version prior to the 23.1.1 release.

docker pull docker.redpanda.com/redpandadata/redpanda-unstable:v23.1.1-rc1

facebookresearch/faiss

A library for efficient similarity search and clustering of dense vectors.


Faiss

Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU. It is developed primarily at Meta's Fundamental AI Research group.

News

See CHANGELOG.md for detailed information about latest features.

Introduction

Faiss contains several methods for similarity search. It assumes that the instances are represented as vectors and are identified by an integer, and that the vectors can be compared with L2 (Euclidean) distances or dot products. Vectors that are similar to a query vector are those that have the lowest L2 distance or the highest dot product with the query vector. It also supports cosine similarity, since this is a dot product on normalized vectors.
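
As a concrete illustration of the L2 comparison described above, here is a minimal sketch using the C++ API with placeholder (all-zero) data, assuming a recent Faiss version where faiss::idx_t is defined; it only shows the shape of the calls, not a realistic workload:

#include <faiss/IndexFlat.h>
#include <vector>

int main() {
    int d = 64;                            // vector dimensionality
    faiss::idx_t nb = 1000;                // number of database vectors
    std::vector<float> xb(d * nb, 0.0f);   // database vectors (placeholder data)
    std::vector<float> xq(d, 0.0f);        // a single query vector

    faiss::IndexFlatL2 index(d);           // exact (brute-force) L2 index
    index.add(nb, xb.data());              // store the database vectors

    faiss::idx_t k = 5;                    // number of nearest neighbours to return
    std::vector<float> distances(k);
    std::vector<faiss::idx_t> labels(k);
    index.search(1, xq.data(), k, distances.data(), labels.data());
    return 0;
}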

Some of the methods, like those based on binary vectors and compact quantization codes, solely use a compressed representation of the vectors and do not require keeping the original vectors. This generally comes at the cost of a less precise search, but these methods can scale to billions of vectors in main memory on a single server. Other methods, like HNSW and NSG, add an indexing structure on top of the raw vectors to make searching more efficient.

The GPU implementation can accept input from either CPU or GPU memory. On a server with GPUs, the GPU indexes can be used as a drop-in replacement for the CPU indexes (e.g., replace IndexFlatL2 with GpuIndexFlatL2), and copies to/from GPU memory are handled automatically. Results will be faster, however, if both input and output remain resident on the GPU. Both single-GPU and multi-GPU usage are supported.
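
A hedged sketch of that drop-in replacement, assuming Faiss was built with GPU support (the headers below exist only in GPU-enabled builds):

#include <faiss/gpu/StandardGpuResources.h>
#include <faiss/gpu/GpuIndexFlat.h>

int main() {
    int d = 64;                                 // vector dimensionality
    faiss::gpu::StandardGpuResources res;       // manages GPU memory and streams
    faiss::gpu::GpuIndexFlatL2 index(&res, d);  // drop-in for faiss::IndexFlatL2
    // add()/search() take the same arguments as the CPU index and accept
    // pointers to either host or device memory.
    return 0;
}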

Installing

Faiss comes with precompiled libraries for Anaconda in Python; see faiss-cpu and faiss-gpu. The library is mostly implemented in C++; the only dependency is a BLAS implementation. Optional GPU support is provided via CUDA, and the Python interface is also optional. It compiles with cmake. See INSTALL.md for details.

How Faiss works

Faiss is built around an index type that stores a set of vectors, and provides a function to search in them with L2 and/or dot product vector comparison. Some index types are simple baselines, such as exact search. Most of the available indexing structures correspond to various trade-offs with respect to

  • search time
  • search quality
  • memory used per index vector
  • training time
  • adding time
  • need for external data for unsupervised training

The optional GPU implementation provides what is likely (as of March 2017) the fastest exact and approximate (compressed-domain) nearest neighbor search implementation for high-dimensional vectors, fastest Lloyd's k-means, and fastest small k-selection algorithm known. The implementation is detailed here.

Full documentation of Faiss

The following are entry points for documentation:

Authors

The main authors of Faiss are:

  • Hervé Jégou initiated the Faiss project and wrote its first implementation
  • Matthijs Douze implemented most of the CPU Faiss
  • Jeff Johnson implemented all of the GPU Faiss
  • Lucas Hosseini implemented the binary indexes and the build system
  • Chengqi Deng implemented NSG, NNdescent and much of the additive quantization code.
  • Alexandr Guzhva many optimizations: SIMD, memory allocation and layout, fast decoding kernels for vector codecs, etc.
  • Gergely Szilvasy build system, benchmarking framework.

Reference

References to cite when you use Faiss in a research paper:

@article{douze2024faiss,
      title={The Faiss library},
      author={Matthijs Douze and Alexandr Guzhva and Chengqi Deng and Jeff Johnson and Gergely Szilvasy and Pierre-Emmanuel Mazaré and Maria Lomeli and Lucas Hosseini and Hervé Jégou},
      year={2024},
      eprint={2401.08281},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

For the GPU version of Faiss, please cite:

@article{johnson2019billion,
  title={Billion-scale similarity search with {GPUs}},
  author={Johnson, Jeff and Douze, Matthijs and J{\'e}gou, Herv{\'e}},
  journal={IEEE Transactions on Big Data},
  volume={7},
  number={3},
  pages={535--547},
  year={2019},
  publisher={IEEE}
}

Join the Faiss community

For public discussion of Faiss or for questions, there is a Facebook group at https://www.facebook.com/groups/faissusers/

We monitor the issues page of the repository. You can report bugs, ask questions, etc.

Legal

Faiss is MIT-licensed, refer to the LICENSE file in the top level directory.

Copyright © Meta Platforms, Inc. See the Terms of Use and Privacy Policy for this project.

IntelRealSense/librealsense

Intel® RealSense™ SDK

Overview

Intel® RealSense™ SDK 2.0 is a cross-platform library for Intel® RealSense™ depth cameras.

📌 For other Intel® RealSense™ devices (F200, R200, LR200 and ZR300), please refer to the latest legacy release.

The SDK allows depth and color streaming, and provides intrinsic and extrinsic calibration information. The library also offers synthetic streams (pointcloud, depth aligned to color and vice versa), and built-in support for record and playback of streaming sessions.

Developer kits containing the necessary hardware to use this library are available for purchase at store.intelrealsense.com. Information about Intel® RealSense™ technology is available at www.intelrealsense.com.

📂 Don't have access to a RealSense camera? Check out the sample data

Update on Recent Changes to the RealSense Product Line

Intel has EOLed the LiDAR, Facial Authentication, and Tracking product lines. These products have been discontinued and will no longer be available for new orders.

Intel WILL continue to sell and support stereo products, including the following: D410, D415, D430, D401, and D450 modules and D415, D435, D435i, D435f, D405, D455, and D457 depth cameras. We will also continue the work to support and develop our LibRealSense open source SDK.

In the future, Intel and the RealSense team will focus our new development on advancing innovative technologies that better support our core businesses and IDM 2.0 strategy.

Building librealsense - Using vcpkg

You can download and install librealsense using the vcpkg dependency manager:

git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install realsense2

The librealsense port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.

Download and Install

  • Download - The latest releases including the Intel RealSense SDK, Viewer and Depth Quality tools are available at: latest releases. Please check the release notes for the supported platforms, new features and capabilities, known issues, how to upgrade the Firmware and more.

  • Install - You can also install the SDK or build it from source (on Linux / Windows / macOS / Android / Docker), connect your D400 depth camera, and you are ready to start writing your first application.

Support & Issues: If you need product support (e.g. ask a question about / are having problems with the device), please check the FAQ & Troubleshooting section. If not covered there, please search our Closed GitHub Issues page, Community and Support sites. If you still cannot find an answer to your question, please open a new issue.

What’s included in the SDK:

What Description Download link
Intel® RealSense™ Viewer With this application, you can quickly access your Intel® RealSense™ Depth Camera to view the depth stream, visualize point clouds, record and playback streams, configure your camera settings, modify advanced controls, enable depth visualization and post processing and much more. Intel.RealSense.Viewer.exe
Depth Quality Tool This application allows you to test the camera’s depth quality, including: standard deviation from plane fit, normalized RMS – the subpixel accuracy, distance accuracy and fill rate. You should be able to easily get and interpret several of the depth quality metrics and record and save the data for offline analysis. Depth.Quality.Tool.exe
Debug Tools Device enumeration, FW logger, etc as can be seen at the tools directory Included in Intel.RealSense.SDK.exe
Code Samples These simple examples demonstrate how to easily use the SDK to include code snippets that access the camera into your applications. Check some of the C++ examples including capture, pointcloud and more and basic C examples Included in Intel.RealSense.SDK.exe
Wrappers Python, C#/.NET API, as well as integration with the following 3rd-party technologies: ROS1, ROS2, LabVIEW, OpenCV, PCL, Unity, Matlab, OpenNI, UnrealEngine4 and more to come.

Ready to Hack!

Our library offers a high level API for using Intel RealSense depth cameras (in addition to lower level ones). The following snippet shows how to start streaming frames and extracting the depth value of a pixel:

#include <librealsense2/rs.hpp> // librealsense C++ header, needed for this snippet
#include <iostream>             // for std::cout

// Create a Pipeline - this serves as a top-level API for streaming and processing frames
rs2::pipeline p;

// Configure and start the pipeline
p.start();

while (true)
{
    // Block program until frames arrive
    rs2::frameset frames = p.wait_for_frames();

    // Try to get a frame of a depth image
    rs2::depth_frame depth = frames.get_depth_frame();

    // Get the depth frame's dimensions
    float width = depth.get_width();
    float height = depth.get_height();

    // Query the distance from the camera to the object in the center of the image
    float dist_to_center = depth.get_distance(width / 2, height / 2);

    // Print the distance
    std::cout << "The camera is facing an object " << dist_to_center << " meters away \r";
}

For more information on the library, please follow our examples, and read the documentation to learn more.

Contributing

In order to contribute to Intel RealSense SDK, please follow our contribution guidelines.

License

This project is licensed under the Apache License, Version 2.0. Copyright 2018 Intel Corporation

x64dbg/x64dbg

An open-source user mode debugger for Windows. Optimized for reverse engineering and malware analysis.


x64dbg


An open-source binary debugger for Windows, aimed at malware analysis and reverse engineering of executables you do not have the source code for. There are many features available and a comprehensive plugin system to add your own. You can find more information on the blog!

Screenshots

main interface (light)

main interface (dark)

graph memory map

Installation & Usage

  1. Download a snapshot from GitHub, SourceForge or OSDN and extract it to a location your user has write access to.
  2. Optionally use x96dbg.exe to register a shell extension and add shortcuts to your desktop.
  3. You can now run x32\x32dbg.exe if you want to debug a 32-bit executable or x64\x64dbg.exe to debug a 64-bit executable! If you are unsure you can always run x96dbg.exe and choose your architecture there.

You can also compile x64dbg yourself with a few easy steps!

Sponsors


Contributing

This is a community effort and we accept pull requests! See the CONTRIBUTING document for more information. If you have any questions you can always contact us or open an issue. You can take a look at the good first issues to get started.

Credits

Developers

Code contributions

You can find an exhaustive list of GitHub contributors here.

Special Thanks

Without the help of many people and other open-source projects, it would not have been possible to make x64dbg what it is today, thank you!

luau-lang/luau

A fast, small, safe, gradually typed embeddable scripting language derived from Lua


Luau

Luau (lowercase u, /ˈlu.aʊ/) is a fast, small, safe, gradually typed embeddable scripting language derived from Lua.

It is designed to be backwards compatible with Lua 5.1, as well as incorporating some features from future Lua releases, but also expands the feature set (most notably with type annotations). Luau is largely implemented from scratch, with the language runtime being a very heavily modified version of the Lua 5.1 runtime, with a completely rewritten interpreter and other performance innovations. The runtime mostly preserves the Lua 5.1 API, so existing bindings should be more or less compatible, with a few caveats.

Luau is used by Roblox game developers to write game code, as well as by Roblox engineers to implement large parts of the user-facing application code as well as portions of the editor (Roblox Studio) as plugins. Roblox chose to open-source Luau to foster collaboration within the Roblox community as well as to allow other companies and communities to benefit from the ongoing language and runtime innovation. As a consequence, Luau is now also used by games like Alan Wake 2 and Warframe.

This repository hosts source code for the language implementation and associated tooling. Documentation for the language is available at https://luau-lang.org/ and accepts contributions via site repository; the language is evolved through RFCs that are located in rfcs repository.

Usage

Luau is an embeddable language, but it also comes with two command-line tools by default, luau and luau-analyze.

luau is a command-line REPL and can also run input files. Note that the REPL runs in a sandboxed environment and as such doesn't have access to the underlying file system except for the ability to require modules.

luau-analyze is a command-line type checker and linter; given a set of input files, it produces errors/warnings according to the file configuration, which can be customized by using --! comments in the files or .luaurc files. For details please refer to type checking and linting documentation.

Installation

You can install and run Luau by downloading the compiled binaries from a recent release; note that luau and luau-analyze binaries from the archives will need to be added to PATH or copied to a directory like /usr/local/bin on Linux/macOS.

Alternatively, you can use one of the packaged distributions (note that these are not maintained by Luau development team):

  • macOS: Install Homebrew and run brew install luau
  • Arch Linux: From the AUR (Arch Linux User Repository), install one of these packages via an AUR helper or manually (by cloning their repo and using makepkg): luau (manual build), luau-git (manual build by cloning this repo), or luau-bin (pre-built binaries from releases)
  • Alpine Linux: Enable community repositories and run apk add luau
  • Gentoo Linux: Luau is officially packaged by Gentoo and can be installed using emerge dev-lang/luau. You may have to unmask the package first before installing it (which can be done by including the --autounmask=y option in the emerge command).

After installing, you will want to validate the installation was successful by running the test case here.

Building

On all platforms, you can use CMake to run the following commands to build Luau binaries from source:

mkdir cmake && cd cmake
cmake .. -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake --build . --target Luau.Repl.CLI --config RelWithDebInfo
cmake --build . --target Luau.Analyze.CLI --config RelWithDebInfo

Alternatively, on Linux/macOS you can use make:

make config=release luau luau-analyze

To integrate Luau into your CMake application projects as a library, at the minimum you'll need to depend on Luau.Compiler and Luau.VM projects. From there you need to create a new Luau state (using Lua 5.x API such as lua_newstate), compile source to bytecode and load it into the VM like this:

// needs lua.h and luacode.h
size_t bytecodeSize = 0;
char* bytecode = luau_compile(source, strlen(source), NULL, &bytecodeSize);
int result = luau_load(L, chunkname, bytecode, bytecodeSize, 0);
free(bytecode);

if (result == 0)
    return 1; /* return chunk main function */

For more details about the use of the host API, you currently need to consult the Lua 5.x API. Luau closely tracks that API but has a few deviations, such as the need to compile source separately (which is important to be able to deploy the VM without a compiler), or the lack of __gc support (use lua_newuserdatadtor instead).
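
As an illustration of the lua_newuserdatadtor substitute for __gc, here is a hedged fragment with a hypothetical File wrapper (not code from this repository):

// needs lua.h and <cstdio>
struct File { std::FILE* handle; };

static void close_file(void* ud)  // invoked when the userdata is garbage collected
{
    std::FILE* f = static_cast<File*>(ud)->handle;
    if (f)
        std::fclose(f);
}

// inside a binding function, with a lua_State* L in scope:
File* f = static_cast<File*>(lua_newuserdatadtor(L, sizeof(File), close_file));
f->handle = std::fopen("data.txt", "r");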

To take advantage of many performance improvements, it's highly recommended to use the safeenv feature, which sandboxes individual scripts' global tables from each other as well as protects builtin libraries from monkey-patching. For this to work, you need to call luaL_sandbox for the global state and luaL_sandboxthread for each new script's execution thread.
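
A minimal sketch of that sandboxing sequence, assuming the bytecode was produced as in the earlier snippet:

// needs lua.h and lualib.h
lua_State* L = luaL_newstate();
luaL_openlibs(L);        // register the builtin libraries
luaL_sandbox(L);         // freeze globals/builtins in the global state (enables safeenv)

lua_State* T = lua_newthread(L);
luaL_sandboxthread(T);   // give this script its own writable global table

// then load and run the compiled bytecode on T, e.g.
// luau_load(T, chunkname, bytecode, bytecodeSize, 0) followed by lua_resume(T, NULL, 0)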

Testing

Luau has an internal test suite; in CMake builds it is split into two targets, Luau.UnitTest (for bytecode compiler and type checker/linter tests) and Luau.Conformance (for VM tests). The unit tests are written in C++, whereas the conformance tests are largely written in Luau (see tests/conformance).

Makefile builds combine both into a single target and can be run via make test.

Dependencies

Luau uses C++ as its implementation language. The runtime requires C++11, whereas the compiler and analysis components require C++17. It should build without issues using Microsoft Visual Studio 2017 or later, or gcc-7 or clang-7 or later.

Other than the STL/CRT, Luau library components don't have external dependencies. The test suite depends on doctest testing framework, and the REPL command-line depends on isocline.

License

Luau implementation is distributed under the terms of MIT License. It is based on Lua 5.x implementation that is MIT licensed as well.

When Luau is integrated into external projects, we ask to honor the license agreement and include Luau attribution into the user-facing product documentation. The attribution using Luau logo is also encouraged.

k2-fsa/sherpa-onnx

Speech-to-text, text-to-speech, and speaker recognition using next-gen Kaldi with onnxruntime without Internet connection. Support embedded systems, Android, iOS, Raspberry Pi, RISC-V, x86_64 servers, websocket server/client, C/C++, Python, Kotlin, C#, Go, NodeJS, Java, Swift, Dart, JavaScript, Flutter


Supported functions

Speech recognition Speech synthesis Speaker verification Speaker identification
✔️ ✔️ ✔️ ✔️
Spoken Language identification Audio tagging Voice activity detection Keyword spotting
✔️ ✔️ ✔️ ✔️

Supported platforms

Architecture Android iOS Windows macOS linux
x64 ✔️ ✔️ ✔️ ✔️
x86 ✔️ ✔️
arm64 ✔️ ✔️ ✔️ ✔️ ✔️
arm32 ✔️ ✔️
riscv64 ✔️

Supported programming languages

C++ C Python C# Java JavaScript Kotlin Swift Go Dart
✔️ ✔️ ✔️ ✔️ ✔️ ✔️ ✔️ ✔️ ✔️ ✔️

It also supports WebAssembly.

Introduction

This repository supports running the following functions locally

  • Speech-to-text (i.e., ASR); both streaming and non-streaming are supported
  • Text-to-speech (i.e., TTS)
  • Speaker identification
  • Speaker verification
  • Spoken language identification
  • Audio tagging
  • VAD (e.g., silero-vad)
  • Keyword spotting

on the following platforms and operating systems:

with the following APIs

  • C++, C, Python, Go, C#
  • Java, Kotlin, JavaScript
  • Swift
  • Dart

Links for pre-built Android APKs

Description URL For users in China
Streaming speech recognition Address Here
Text-to-speech Address Here
Voice activity detection (VAD) Address Here
VAD + non-streaming speech recognition Address Here
Two-pass speech recognition Address Here
Audio tagging Address Here
Audio tagging (WearOS) Address Here
Speaker identification Address Here
Spoken language identification Address Here
Keyword spotting Address Here

Links for pre-built Flutter APPs

Description URL For users in China
Streaming speech recognition Address Here

Links for pre-trained models

Description URL
Speech recognition (speech to text, ASR) Address
Text-to-speech (TTS) Address
VAD Address
Keyword spotting Address
Audio tagging Address
Speaker identification (Speaker ID) Address
Spoken language identification (Language ID) See multi-lingual Whisper ASR models from Speech recognition
Punctuation Address

Useful links

How to reach us

Please see https://k2-fsa.github.io/sherpa/social-groups.html for the next-gen Kaldi WeChat group and QQ group.

leejet/stable-diffusion.cpp

Stable Diffusion in pure C/C++


stable-diffusion.cpp

Inference of Stable Diffusion in pure C/C++

Features

  • Plain C/C++ implementation based on ggml, working in the same way as llama.cpp

  • Super lightweight and without external dependencies

  • SD1.x, SD2.x and SDXL support

    • Note: The VAE in SDXL encounters NaN issues under FP16, but unfortunately, ggml_conv_2d only operates under FP16. Hence, a parameter is needed to specify a VAE that has the FP16 NaN issue fixed. You can find it here: SDXL VAE FP16 Fix.
  • SD-Turbo and SDXL-Turbo support

  • PhotoMaker support.

  • 16-bit, 32-bit float support

  • 4-bit, 5-bit and 8-bit integer quantization support

  • Accelerated memory-efficient CPU inference

    • Only requires ~2.3GB when using txt2img with fp16 precision to generate a 512x512 image; enabling Flash Attention reduces this to ~1.8GB.
  • AVX, AVX2 and AVX512 support for x86 architectures

  • Full CUDA and Metal backend for GPU acceleration.

  • Can load ckpt, safetensors and diffusers models/checkpoints, as well as standalone VAE models

    • No need to convert to .ggml or .gguf anymore!
  • Flash Attention for memory usage optimization (CPU only for now)

  • Original txt2img and img2img mode

  • Negative prompt

  • stable-diffusion-webui style tokenizer (not all the features, only token weighting for now)

  • LoRA support, same as stable-diffusion-webui

  • Latent Consistency Models support (LCM/LCM-LoRA)

  • Faster and memory efficient latent decoding with TAESD

  • Upscale images generated with ESRGAN

  • VAE tiling processing to reduce memory usage

  • Control Net support with SD 1.5

  • Sampling method

  • Cross-platform reproducibility (--rng cuda, consistent with the stable-diffusion-webui GPU RNG)

  • Embeds generation parameters into PNG output as a webui-compatible text string

  • Supported platforms

    • Linux
    • Mac OS
    • Windows
    • Android (via Termux)

TODO

  • More sampling methods
  • Make inference faster
    • The current implementation of ggml_conv_2d is slow and has high memory usage
  • Continuing to reduce memory usage (quantizing the weights of ggml_conv_2d)
  • Implement Inpainting support
  • k-quants support

Usage

For most users, you can download the built executable program from the latest release. If the built product does not meet your requirements, you can choose to build it manually.

Get the Code

git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp
  • If you have already cloned the repository, you can use the following command to update the repository to the latest code.
cd stable-diffusion.cpp
git pull origin master
git submodule init
git submodule update

Download weights

Build

Build from scratch

mkdir build
cd build
cmake ..
cmake --build . --config Release
Using OpenBLAS
cmake .. -DGGML_OPENBLAS=ON
cmake --build . --config Release
Using CUBLAS

This provides BLAS acceleration using the CUDA cores of your Nvidia GPU. Make sure to have the CUDA toolkit installed. You can download it from your Linux distro's package manager (e.g. apt install nvidia-cuda-toolkit) or from here: CUDA Toolkit. Recommended to have at least 4 GB of VRAM.

cmake .. -DSD_CUBLAS=ON
cmake --build . --config Release
Using HipBLAS

This provides BLAS acceleration using the ROCm cores of your AMD GPU. Make sure to have the ROCm toolkit installed.

Windows User Refer to docs/hipBLAS_on_Windows.md for a comprehensive guide.

cmake .. -G "Ninja" -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DSD_HIPBLAS=ON -DCMAKE_BUILD_TYPE=Release -DAMDGPU_TARGETS=gfx1100
cmake --build . --config Release
Using Metal

Using Metal makes the computation run on the GPU. Currently, there are some issues with Metal when performing operations on very large matrices, making it highly inefficient at the moment. Performance improvements are expected in the near future.

cmake .. -DSD_METAL=ON
cmake --build . --config Release
Using Flash Attention

Enabling flash attention reduces memory usage by at least 400 MB. At the moment, it is not supported when CUBLAS is enabled because the kernel implementation is missing.

cmake .. -DSD_FLASH_ATTN=ON
cmake --build . --config Release

Run

usage: ./bin/sd [arguments]

arguments:
  -h, --help                         show this help message and exit
  -M, --mode [MODEL]                 run mode (txt2img or img2img or convert, default: txt2img)
  -t, --threads N                    number of threads to use during computation (default: -1).
                                     If threads <= 0, then threads will be set to the number of CPU physical cores
  -m, --model [MODEL]                path to model
  --vae [VAE]                        path to vae
  --taesd [TAESD_PATH]               path to taesd. Using Tiny AutoEncoder for fast decoding (low quality)
  --control-net [CONTROL_PATH]       path to control net model
  --embd-dir [EMBEDDING_PATH]        path to embeddings.
  --stacked-id-embd-dir [DIR]        path to PHOTOMAKER stacked id embeddings.
  --input-id-images-dir [DIR]        path to PHOTOMAKER input id images dir.
  --normalize-input                  normalize PHOTOMAKER input id images
  --upscale-model [ESRGAN_PATH]      path to esrgan model. Upscale images after generate, just RealESRGAN_x4plus_anime_6B supported by now.
  --upscale-repeats                  Run the ESRGAN upscaler this many times (default 1)
  --type [TYPE]                      weight type (f32, f16, q4_0, q4_1, q5_0, q5_1, q8_0)
                                     If not specified, the default is the type of the weight file.
  --lora-model-dir [DIR]             lora model directory
  -i, --init-img [IMAGE]             path to the input image, required by img2img
  --control-image [IMAGE]            path to image condition, control net
  -o, --output OUTPUT                path to write result image to (default: ./output.png)
  -p, --prompt [PROMPT]              the prompt to render
  -n, --negative-prompt PROMPT       the negative prompt (default: "")
  --cfg-scale SCALE                  unconditional guidance scale: (default: 7.0)
  --strength STRENGTH                strength for noising/unnoising (default: 0.75)
  --style-ratio STYLE-RATIO          strength for keeping input identity (default: 20%)
  --control-strength STRENGTH        strength to apply Control Net (default: 0.9)
                                     1.0 corresponds to full destruction of information in init image
  -H, --height H                     image height, in pixel space (default: 512)
  -W, --width W                      image width, in pixel space (default: 512)
  --sampling-method {euler, euler_a, heun, dpm2, dpm++2s_a, dpm++2m, dpm++2mv2, lcm}
                                     sampling method (default: "euler_a")
  --steps  STEPS                     number of sample steps (default: 20)
  --rng {std_default, cuda}          RNG (default: cuda)
  -s SEED, --seed SEED               RNG seed (default: 42, use random seed for < 0)
  -b, --batch-count COUNT            number of images to generate.
  --schedule {discrete, karras, ays} Denoiser sigma schedule (default: discrete)
  --clip-skip N                      ignore last layers of CLIP network; 1 ignores none, 2 ignores one layer (default: -1)
                                     <= 0 represents unspecified, will be 1 for SD1.x, 2 for SD2.x
  --vae-tiling                       process vae in tiles to reduce memory usage
  --control-net-cpu                  keep controlnet in cpu (for low vram)
  --canny                            apply canny preprocessor (edge detection)
  --color                            colors the logging tags according to level
  -v, --verbose                      print extra info

Quantization

You can specify the model weight type using the --type parameter. The weights are automatically converted when loading the model.

  • f16 for 16-bit floating-point
  • f32 for 32-bit floating-point
  • q8_0 for 8-bit integer quantization
  • q5_0 or q5_1 for 5-bit integer quantization
  • q4_0 or q4_1 for 4-bit integer quantization

Convert to GGUF

You can also convert weights in the formats ckpt/safetensors/diffusers to gguf and perform quantization in advance, avoiding the need for quantization every time you load them.

For example:

./bin/sd -M convert -m ../models/v1-5-pruned-emaonly.safetensors -o  ../models/v1-5-pruned-emaonly.q8_0.gguf -v --type q8_0

txt2img example

./bin/sd -m ../models/sd-v1-4.ckpt -p "a lovely cat"
# ./bin/sd -m ../models/v1-5-pruned-emaonly.safetensors -p "a lovely cat"
# ./bin/sd -m ../models/sd_xl_base_1.0.safetensors --vae ../models/sdxl_vae-fp16-fix.safetensors -H 1024 -W 1024 -p "a lovely cat" -v

Using formats of different precisions will yield results of varying quality.

f32 f16 q8_0 q5_0 q5_1 q4_0 q4_1

img2img example

  • ./output.png is the image generated from the above txt2img pipeline
./bin/sd --mode img2img -m ../models/sd-v1-4.ckpt -p "cat with blue eyes" -i ./output.png -o ./img2img_output.png --strength 0.4

with LoRA

  • You can specify the directory where the lora weights are stored via --lora-model-dir. If not specified, the default is the current working directory.

  • LoRA is specified via prompt, just like stable-diffusion-webui.

Here's a simple example:

./bin/sd -m ../models/v1-5-pruned-emaonly.safetensors -p "a lovely cat<lora:marblesh:1>" --lora-model-dir ../models

../models/marblesh.safetensors or ../models/marblesh.ckpt will be applied to the model

LCM/LCM-LoRA

  • Download LCM-LoRA from https://huggingface.co/latent-consistency/lcm-lora-sdv1-5
  • Specify LCM-LoRA by adding <lora:lcm-lora-sdv1-5:1> to prompt
  • It's advisable to set --cfg-scale to 1.0 instead of the default 7.0. For --steps, a range of 2-8 steps is recommended. For --sampling-method, lcm/euler_a is recommended.

Here's a simple example:

./bin/sd -m ../models/v1-5-pruned-emaonly.safetensors -p "a lovely cat<lora:lcm-lora-sdv1-5:1>" --steps 4 --lora-model-dir ../models -v --cfg-scale 1
without LCM-LoRA (--cfg-scale 7) with LCM-LoRA (--cfg-scale 1)

Using TAESD for faster decoding

You can use TAESD to accelerate the decoding of latent images by following these steps:

Or curl

curl -L -O https://huggingface.co/madebyollin/taesd/blob/main/diffusion_pytorch_model.safetensors
  • Specify the model path using the --taesd PATH parameter. example:
sd -m ../models/v1-5-pruned-emaonly.safetensors -p "a lovely cat" --taesd ../models/diffusion_pytorch_model.safetensors

Using ESRGAN to upscale results

You can use ESRGAN to upscale the generated images. At the moment, only the RealESRGAN_x4plus_anime_6B.pth model is supported. Support for more models of this architecture will be added soon.

  • Specify the model path using the --upscale-model PATH parameter. example:
sd -m ../models/v1-5-pruned-emaonly.safetensors -p "a lovely cat" --upscale-model ../models/RealESRGAN_x4plus_anime_6B.pth

Using PhotoMaker to personalize image generation

You can use PhotoMaker to personalize generated images with your own ID.

NOTE, currently PhotoMaker ONLY works with SDXL (any SDXL model files will work).

Download the PhotoMaker model file (in safetensor format) here. The official release of the model file (in .bin format) does not work with stable-diffusion.cpp.

  • Specify the PhotoMaker model path using the --stacked-id-embd-dir PATH parameter.
  • Specify the input images path using the --input-id-images-dir PATH parameter.
    • input images must have the same width and height for preprocessing (to be improved)

In prompt, make sure you have a class word followed by the trigger word "img" (hard-coded for now). The class word could be one of "man, woman, girl, boy". If input ID images contain asian faces, add Asian before the class word.

Another PhotoMaker specific parameter:

  • --style-ratio (0-100)%: default is 20 and 10-20 typically gets good results. Lower ratio means more faithfully following input ID (not necessarily better quality).

Other parameters recommended for running Photomaker:

  • --cfg-scale 5.0
  • -H 1024
  • -W 1024

If you are on a low-memory GPU (<= 8GB), it is recommended to run with the --vae-on-cpu option to get artifact-free images.

Example:

bin/sd -m ../models/sdxlUnstableDiffusers_v11.safetensors  --vae ../models/sdxl_vae.safetensors --stacked-id-embd-dir ../models/photomaker-v1.safetensors --input-id-images-dir ../assets/examples/scarletthead_woman -p "a girl img, retro futurism, retro game art style but extremely beautiful, intricate details, masterpiece, best quality, space-themed, cosmic, celestial, stars, galaxies, nebulas, planets, science fiction, highly detailed" -n "realistic, photo-realistic, worst quality, greyscale, bad anatomy, bad hands, error, text" --cfg-scale 5.0  --sampling-method euler -H 1024 -W 1024 --style-ratio 10 --vae-on-cpu -o output.png

Docker

Building using Docker

docker build -t sd .

Run

docker run -v /path/to/models:/models -v /path/to/output/:/output sd [args...]
# For example
# docker run -v ./models:/models -v ./build:/output sd -m /models/sd-v1-4.ckpt -p "a lovely cat" -v -o /output/output.png

Memory Requirements

precision f32 f16 q8_0 q5_0 q5_1 q4_0 q4_1
Memory (txt2img - 512 x 512) ~2.8G ~2.3G ~2.1G ~2.0G ~2.0G ~2.0G ~2.0G
Memory (txt2img - 512 x 512) with Flash Attention ~2.4G ~1.9G ~1.6G ~1.5G ~1.5G ~1.5G ~1.5G

Bindings

These projects wrap stable-diffusion.cpp for easier use in other languages/frameworks.

UIs

These projects use stable-diffusion.cpp as a backend for their image generation.

Contributors

Thank you to all the people who have already contributed to stable-diffusion.cpp!


References

stenzek/duckstation

Fast PlayStation 1 emulator for x86-64/AArch32/AArch64/RV64


DuckStation - PlayStation 1, aka. PSX Emulator

Features | Downloading and Running | Building | Disclaimers

Latest Builds for Windows 10/11 (x64/ARM64), Linux (AppImage/Flatpak), and macOS (11.0+ Universal): https://github.com/stenzek/duckstation/releases/tag/latest

Game Compatibility List: https://docs.google.com/spreadsheets/d/e/2PACX-1vRE0jjiK_aldpICoy5kVQlpk2f81Vo6P4p9vfg4d7YoTOoDlH4PQHoXjTD2F7SdN8SSBLoEAItaIqQo/pubhtml

Discord Server: https://www.duckstation.org/discord.html

DuckStation is a simulator/emulator of the Sony PlayStation(TM) console, focusing on playability, speed, and long-term maintainability. The goal is to be as accurate as possible while maintaining performance suitable for low-end devices. "Hack" options are discouraged; the default configuration should support all playable games, with only some of the enhancements having compatibility issues.

A "BIOS" ROM image is required to to start the emulator and to play games. You can use an image from any hardware version or region, although mismatching game regions and BIOS regions may have compatibility issues. A ROM image is not provided with the emulator for legal reasons, you should dump this from your own console using Caetla or other means.

Features

DuckStation features a fully-featured frontend built using Qt, as well as a fullscreen/TV UI based on Dear ImGui.

Main Window Screenshot Fullscreen UI Screenshot

Other features include:

  • CPU Recompiler/JIT (x86-64, armv7/AArch32, AArch64, RISC-V/RV64).
  • Hardware (D3D11, D3D12, OpenGL, Vulkan, Metal) and software rendering.
  • Upscaling, texture filtering, and true colour (24-bit) in hardware renderers.
  • PGXP for geometry precision, texture correction, and depth buffer emulation.
  • Adaptive downsampling filter.
  • Post processing shader chains (GLSL and experimental Reshade FX).
  • "Fast boot" for skipping BIOS splash/intro.
  • Save state support.
  • Windows, Linux, macOS support.
  • Supports bin/cue images, raw bin/img files, MAME CHD, single-track ECM, MDS/MDF, and unencrypted PBP formats.
  • Direct booting of homebrew executables.
  • Direct loading of Portable Sound Format (psf) files.
  • Digital and analog controllers for input (rumble is forwarded to host).
  • Namco GunCon lightgun support (simulated with mouse).
  • NeGcon support.
  • Qt and "Big Picture" UI.
  • Automatic updates with preview and latest channels.
  • Automatic content scanning - game titles/hashes are provided by redump.org.
  • Optional automatic switching of memory cards for each game.
  • Supports loading cheats from existing lists.
  • Memory card editor and save importer.
  • Emulated CPU overclocking.
  • Integrated and remote debugging.
  • Multitap controllers (up to 8 devices).
  • RetroAchievements.
  • Automatic loading/applying of PPF patches.

System Requirements

  • A CPU faster than a potato. But it needs to be x86_64, AArch32/armv7, AArch64/ARMv8, or RISC-V/RV64.
  • For the hardware renderers, a GPU capable of OpenGL 3.1/OpenGL ES 3.1/Direct3D 11 Feature Level 10.0 (or Vulkan 1.0) and above. So, basically anything made in the last 10 years or so.
  • SDL, XInput or DInput compatible game controller (e.g. XB360/XBOne/XBSeries). DualShock 3 users on Windows will need to install the official DualShock 3 drivers included as part of PlayStation Now.

Downloading and running

Binaries of DuckStation for Windows x64/ARM64, Linux x86_64 (in AppImage/Flatpak formats), and macOS Universal Binaries are available via GitHub Releases and are automatically built with every commit/push. Binaries or packages distributed through other sources may be out of date and are not supported by the developer, please speak to them for support, not us.

Windows

DuckStation requires Windows 10/11, specifically version 1809 or newer. If you are still using Windows 7/8/8.1, DuckStation will not run on your operating system. Running these operating systems in 2023 should be considered a security risk, and I would recommend updating to something which receives vendor support. If you must use an older operating system, v0.1-5624 is the last version which will run, but do not expect to receive any assistance; these builds are no longer supported.

To download:

Once downloaded and extracted, you can launch the emulator with duckstation-qt-x64-ReleaseLTCG.exe. Follow the Setup Wizard to get started.

If you get an error about vcruntime140_1.dll being missing, you will need to update your Visual C++ runtime. You can do that from this page: https://support.microsoft.com/en-au/help/2977003/the-latest-supported-visual-c-downloads. Specifically, you want the x64 runtime, which can be downloaded from https://aka.ms/vs/17/release/vc_redist.x64.exe.

Linux

The only supported versions of DuckStation for Linux are the AppImage and Flatpak on the releases page. If you installed DuckStation from another source or distribution (e.g. EmuDeck), you should contact the packager for support; we have no control over it.

The release on Flathub is official, and synchronized with the latest rolling/stable release on GitHub.

You should not install DuckStation from unofficial repositories such as the AUR, they are known to be broken.

AppImage

The AppImages require a distribution equivalent to Ubuntu 22.04 or newer to run.

Flatpak

or, if you have FlatHub set up:

  • Run flatpak install org.duckstation.DuckStation.

Use flatpak run org.duckstation.DuckStation to start, or select DuckStation in the launcher of your desktop environment. Follow the Setup Wizard to get started.

macOS

Universal macOS builds are provided for both x64 and ARM64 (Apple Silicon).

macOS Big Sur (11.0) is required, as this is also the minimum requirement for Qt.

To download:

Android

You will need a device with armv7 (32-bit ARM), AArch64 (64-bit ARM), or x86_64 (64-bit x86). 64-bit is preferred; the requirements are higher for 32-bit, and you'll probably want at least a 1.5GHz CPU.

Download from Google Play: https://play.google.com/store/apps/details?id=com.github.stenzek.duckstation APK and Beta Downloads: https://www.duckstation.org/android/

No support is provided for the Android app; it is free, and your expectations should be in line with that. Please do not email me about issues with it or ask for help; you will be ignored.

To use:

  1. Install and run the app for the first time.
  2. Follow the setup wizard.

If you have an external controller, you will need to map the buttons and sticks in settings.

LibCrypt protection and SBI files

A number of PAL region games use LibCrypt protection, requiring additional CD subchannel information to run properly. libcrypt not functioning usually manifests as hanging or crashing, but can sometimes affect gameplay too, depending on how the game implemented it.

For these games, make sure that the CD image and its corresponding SBI (.sbi) file have the same name and are placed in the same directory. DuckStation will automatically load the SBI file when it is found next to the CD image.

For example, if your disc image was named Spyro3.cue, you would place the SBI file in the same directory, and name it Spyro3.sbi.

CHD images with built-in subchannel information are also supported.

Building

Windows

Requirements:

  • Visual Studio 2022
  1. Clone the respository: git clone https://github.com/stenzek/duckstation.git.
  2. Download the dependencies pack from https://github.com/stenzek/duckstation-ext-qt-minimal/releases/download/latest/deps-x64.7z, and extract it to dep\msvc.
  3. Open the Visual Studio solution duckstation.sln in the root, or "Open Folder" for cmake build.
  4. Build solution.
  5. Binaries are located in bin/x64.
  6. Run duckstation-qt-x64-Release.exe or whichever config you used.

Linux

Required Dependencies

Ubuntu/Debian package names:

build-essential clang cmake curl extra-cmake-modules git libasound2-dev libcurl4-openssl-dev libdbus-1-dev libdecor-0-dev libegl-dev libevdev-dev libfontconfig-dev libfreetype-dev libgtk-3-dev libgudev-1.0-dev libharfbuzz-dev libinput-dev libopengl-dev libpipewire-0.3-dev libpulse-dev libssl-dev libudev-dev libwayland-dev libx11-dev libx11-xcb-dev libxcb1-dev libxcb-composite0-dev libxcb-cursor-dev libxcb-damage0-dev libxcb-glx0-dev libxcb-icccm4-dev libxcb-image0-dev libxcb-keysyms1-dev libxcb-present-dev libxcb-randr0-dev libxcb-render0-dev libxcb-render-util0-dev libxcb-shape0-dev libxcb-shm0-dev libxcb-sync-dev libxcb-util-dev libxcb-xfixes0-dev libxcb-xinput-dev libxcb-xkb-dev libxext-dev libxkbcommon-x11-dev libxrandr-dev lld llvm ninja-build pkg-config zlib1g-dev

Fedora package names:

alsa-lib-devel brotli-devel clang cmake dbus-devel egl-wayland-devel extra-cmake-modules fontconfig-devel gcc-c++ gtk3-devel libcurl-devel libdecor-devel libevdev-devel libICE-devel libinput-devel libSM-devel libX11-devel libXau-devel libxcb-devel libXcomposite-devel libXcursor-devel libXext-devel libXfixes-devel libXft-devel libXi-devel libxkbcommon-devel libxkbcommon-x11-devel libXpresent-devel libXrandr-devel libXrender-devel lld llvm make mesa-libEGL-devel mesa-libGL-devel ninja-build openssl-devel patch pcre2-devel perl-Digest-SHA pipewire-devel pulseaudio-libs-devel systemd-devel wayland-devel xcb-util-cursor-devel xcb-util-devel xcb-util-errors-devel xcb-util-image-devel xcb-util-keysyms-devel xcb-util-renderutil-devel xcb-util-wm-devel xcb-util-xrm-devel zlib-devel

Building

  1. Clone the repository: git clone https://github.com/stenzek/duckstation.git, cd duckstation.
  2. Build dependencies. You can save these outside of the tree if you like. This will take a while. scripts/build-dependencies-linux.sh deps.
  3. Run CMake to configure the build system. Assuming a build subdirectory of build-release, run cmake -B build-release -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_EXE_LINKER_FLAGS_INIT="-fuse-ld=lld" -DCMAKE_MODULE_LINKER_FLAGS_INIT="-fuse-ld=lld" -DCMAKE_SHARED_LINKER_FLAGS_INIT="-fuse-ld=lld" -DCMAKE_PREFIX_PATH="$PWD/deps" -G Ninja. If you want a release (optimized) build, include -DCMAKE_BUILD_TYPE=Release -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON.
  4. Compile the source code. For the example above, run ninja -C build-release
  5. Run the binary, located in the build directory under ./build-release/bin/duckstation-qt.

macOS

Requirements:

  • CMake
  • Xcode
  1. Clone the repository: git clone https://github.com/stenzek/duckstation.git.
  2. Build the dependencies. This will take a while. scripts/build-dependencies-mac.sh deps.
  3. Run CMake to configure the build system: cmake -Bbuild-release -DCMAKE_BUILD_TYPE=Release -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON -DCMAKE_PREFIX_PATH="$PWD/deps".
  4. Compile the source code: cmake --build build-release --parallel.
  5. Run the binary, located in the build directory under bin/DuckStation.app.

User Directories

The "User Directory" is where you should place your BIOS images, where settings are saved to, and memory cards/save states are saved by default. An optional SDL game controller database file can be also placed here.

This is located in the following places depending on the platform you're using:

  • Windows: My Documents\DuckStation
  • Linux: $XDG_DATA_HOME/duckstation, or ~/.local/share/duckstation.
  • macOS: ~/Library/Application Support/DuckStation.

So, if you were using Linux, you would place your BIOS images in ~/.local/share/duckstation/bios. This directory will be created upon running DuckStation for the first time.

If you wish to use a "portable" build, where the user directory is the same as where the executable is located, create an empty file named portable.txt in the same directory as the DuckStation executable.

Bindings for Qt frontend

Your keyboard or game controller can be used to simulate a variety of PlayStation controllers. Controller input is supported through DInput, XInput, and SDL backends and can be changed through Settings -> General Settings.

To bind your input device, go to Settings -> Controllers. Each of the buttons/axes for the simulated controller will be listed, alongside the corresponding key/button on your device that it is currently bound to. To rebind, click the box next to the button/axis name, and press the key or button on your input device that you wish to bind to. When binding rumble, simply press any button on the controller you wish to send rumble to.

SDL Game Controller Database

DuckStation releases ship with a database of game controller mappings for the SDL controller backend, courtesy of https://github.com/gabomdq/SDL_GameControllerDB. The included gamecontrollerdb.txt file can be found in the database subdirectory of the DuckStation program directory.

If you are experiencing issues binding your controller with the SDL controller backend, you may need to add a custom mapping to the database file. Make a copy of gamecontrollerdb.txt and place it in your user directory (or directly in the program directory, if running in portable mode) and then follow the instructions in the SDL_GameControllerDB repository for creating a new mapping. Add this mapping to the new copy of gamecontrollerdb.txt and your controller should then be recognized properly.

Default bindings

Controller 1:

  • Left Stick: W/A/S/D
  • Right Stick: T/F/G/H
  • D-Pad: Up/Left/Down/Right
  • Triangle/Square/Circle/Cross: I/J/L/K
  • L1/R1: Q/E
  • L2/R2: 1/3
  • L3/R3: 2/4
  • Start: Enter
  • Select: Backspace

Hotkeys:

  • Escape: Open Pause Menu
  • F11: Toggle Fullscreen
  • Tab: Temporarily Disable Speed Limiter
  • Space: Pause/Resume Emulation

Disclaimers

Icon by icons8: https://icons8.com/icon/74847/platforms.undefined.short-title

"PlayStation" and "PSX" are registered trademarks of Sony Interactive Entertainment Europe Limited. This project is not affiliated in any way with Sony Interactive Entertainment.

CrowCpp/Crow

A Fast and Easy to use microframework for the web.



Description

Crow is a C++ framework for creating HTTP or Websocket web services. It uses routing similar to Python's Flask, which makes it easy to use. It is also extremely fast, beating multiple existing C++ frameworks as well as non-C++ frameworks.

Features

  • Easy Routing (similar to Flask).
  • Type-safe handlers.
  • Blazingly fast (see this benchmark and this benchmark).
  • Built in JSON support.
  • Mustache based templating library (crow::mustache); see the sketch after this list.
  • Header only library (single header file available).
  • Middleware support for extensions.
  • HTTP/1.1 and Websocket support.
  • Multi-part request and response support.
  • Uses modern C++ (11/14)
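
As a small taste of the Mustache templating mentioned above, the hedged sketch below loads a hypothetical about.html from the template directory and returns the rendered page, written in the same style as the examples further below:

CROW_ROUTE(app, "/about")
([](){
    auto page = crow::mustache::load("about.html"); // hypothetical template file
    return page.render();
});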

Still in development

Documentation

Available here.

Warning

If you are using Crow v0.3, then you have to put #define CROW_MAIN at the top of one and only one source file.
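
For example, a minimal sketch of what that single translation unit might look like (the file name main.cpp is only illustrative):

// main.cpp - the one and only source file that defines CROW_MAIN (Crow v0.3 only)
#define CROW_MAIN
#include "crow.h"
// ...the rest of the application (e.g. the Hello World below) goes here;
// every other source file just includes "crow.h" without the define.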

Examples

Hello World

#include "crow.h"

int main()
{
    crow::SimpleApp app;

    CROW_ROUTE(app, "/")([](){
        return "Hello world";
    });

    app.port(18080).multithreaded().run();
}

JSON Response

CROW_ROUTE(app, "/json")
([]{
    crow::json::wvalue x({{"message", "Hello, World!"}});
    x["message2"] = "Hello, World.. Again!";
    return x;
});

Arguments

CROW_ROUTE(app,"/hello/<int>")
([](int count){
    if (count > 100)
        return crow::response(400);
    std::ostringstream os;
    os << count << " bottles of beer!";
    return crow::response(os.str());
});

Handler arguments type check at compile time

// Compile error with message "Handler type is mismatched with URL parameters"
CROW_ROUTE(app,"/another/<int>")
([](int a, int b){
    return crow::response(500);
});

Handling JSON Requests

CROW_ROUTE(app, "/add_json")
.methods("POST"_method)
([](const crow::request& req){
    auto x = crow::json::load(req.body);
    if (!x)
        return crow::response(crow::status::BAD_REQUEST); // same as crow::response(400)
    int sum = x["a"].i()+x["b"].i();
    std::ostringstream os;
    os << sum;
    return crow::response{os.str()};
});

More examples can be found here.

Setting Up / Building

Available here.

Disclaimer

CrowCpp/Crow is a project based on ipkn/crow. Neither CrowCpp, its members, nor this project have been associated with, endorsed, or supported by ipkn (Jaeseung Ha) in any way. We do use ipkn/crow's source code under the BSD-3-Clause license and sometimes refer to the public comments available on the GitHub repository. But we do not in any way claim to be associated with or in contact with ipkn (Jaeseung Ha) regarding CrowCpp or CrowCpp/Crow.

Attributions

Crow has incorporated the following libraries into its source.

http-parser (used for converting http strings to crow::request objects)

https://github.com/nodejs/http-parser

http_parser.c is based on src/http/ngx_http_parse.c from NGINX copyright
Igor Sysoev.

Additional changes are licensed under the same terms as NGINX and
copyright Joyent, Inc. and other Node contributors. All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE. 

---------------------------------------------------------------------------


qs_parse (used for reading query string parameters)

https://github.com/bartgrantham/qs_parse

Copyright (c) 2010 Bart Grantham
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

---------------------------------------------------------------------------


TinySHA1 (used during the websocket handshake, not for security)

https://github.com/mohaps/TinySHA1

TinySHA1 - a header only implementation of the SHA1 algorithm. Based on the
implementation in boost::uuid::details

Copyright (c) 2012-22 SAURAV MOHAPATRA [email protected]
Permission to use, copy, modify, and distribute this software for any purpose
with or without fee is hereby granted, provided that the above copyright 
notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, 
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM 
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.

---------------------------------------------------------------------------


Catch2 (used only in unit tests, not part of the actual framework)

https://github.com/catchorg/Catch2

Boost Software License - Version 1.0 - August 17th, 2003

Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:

The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer,
must be included in all copies of the Software, in whole or in part, and
all derivative works of the Software, unless such copies or derivative
works are solely in the form of machine-executable object code generated by
a source language processor.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

topjohnwu/Magisk

The Magic Mask for Android


Downloads

This is not an officially supported Google product

Introduction

Magisk is a suite of open source software for customizing Android, supporting devices running Android 6.0 and higher.
Some highlight features:

  • MagiskSU: Provide root access for applications
  • Magisk Modules: Modify read-only partitions by installing modules
  • MagiskBoot: The most complete tool for unpacking and repacking Android boot images
  • Zygisk: Run code in every Android application's processes

Downloads

GitHub is the only source where you can get official Magisk information and downloads.

Useful Links

Bug Reports

Only bug reports from Debug builds will be accepted.

For installation issues, upload both boot image and install logs.
For Magisk issues, upload boot logcat or dmesg.
For Magisk app crashes, record and upload the logcat when the crash occurs.

Translation Contributions

Default string resources for the Magisk app and its stub APK are located here:

  • app/src/main/res/values/strings.xml
  • stub/src/main/res/values/strings.xml

Translate each and place them in the respective locations ([module]/src/main/res/values-[lang]/strings.xml).

License

Magisk, including all git submodules, is free software:
you can redistribute it and/or modify it under the terms of the
GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/>.

me-no-dev/ESPAsyncWebServer

Async Web Server for ESP8266 and ESP32


ESPAsyncWebServer

Build Status Codacy Badge

For help and support, join the chat at https://gitter.im/me-no-dev/ESPAsyncWebServer

Async HTTP and WebSocket Server for ESP8266 Arduino

For ESP8266 it requires ESPAsyncTCP. To use this library you might need to have the latest git versions of ESP8266 Arduino Core.

For ESP32 it requires AsyncTCP to work. To use this library you might need to have the latest git versions of ESP32 Arduino Core.
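
A minimal sketch of the platform-specific includes, assuming the ESPAsyncTCP/AsyncTCP dependencies above are installed:

#ifdef ESP32
#include <WiFi.h>
#include <AsyncTCP.h>
#else
#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#endif
#include <ESPAsyncWebServer.h>

AsyncWebServer server(80); // same API on both platforms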

Table of contents

Installation

Using PlatformIO

PlatformIO is an open source ecosystem for IoT development with a cross-platform build system, library manager and full support for Espressif ESP8266/ESP32 development. It works on all popular host OSes: Mac OS X, Windows, Linux 32/64, Linux ARM (like Raspberry Pi, BeagleBone, CubieBoard).

  1. Install PlatformIO IDE
  2. Create new project using "PlatformIO Home > New Project"
  3. Update dev/platform to staging version:
  4. Add "ESP Async WebServer" to project using Project Configuration File platformio.ini and lib_deps option:
[env:myboard]
platform = espressif...
board = ...
framework = arduino

# using the latest stable version
lib_deps = ESP Async WebServer

# or using GIT Url (the latest development version)
lib_deps = https://github.com/me-no-dev/ESPAsyncWebServer.git
  5. Happy coding with PlatformIO!

Why should you care

  • Using asynchronous network means that you can handle more than one connection at the same time
  • You are called once the request is ready and parsed
  • When you send the response, you are immediately ready to handle other connections while the server is taking care of sending the response in the background
  • Speed is OMG
  • Easy to use API, HTTP Basic and Digest MD5 Authentication (default), ChunkedResponse
  • Easily extendible to handle any type of content
  • Supports Continue 100
  • Async WebSocket plugin offering different locations without extra servers or ports
  • Async EventSource (Server-Sent Events) plugin to send events to the browser
  • URL Rewrite plugin for conditional and permanent url rewrites
  • ServeStatic plugin that supports cache, Last-Modified, default index and more
  • Simple template processing engine to handle templates

Important things to remember

  • This is a fully asynchronous server and as such does not run on the loop thread.
  • You can not use yield or delay or any function that uses them inside the callbacks (see the sketch after this list for one way to defer slow work to loop())
  • The server is smart enough to know when to close the connection and free resources
  • You can not send more than one response to a single request
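
One common workaround, sketched below with placeholder names, is to set a flag inside the callback and do the slow work from loop():

// the handler only records that work is needed; loop() performs it later
volatile bool workPending = false;

server.on("/work", HTTP_GET, [](AsyncWebServerRequest *request){
  workPending = true;
  request->send(200, "text/plain", "Work scheduled");
});

void loop(){
  if(workPending){
    workPending = false;
    // do the long-running task here, where blocking calls are allowed
  }
}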

Principles of operation

The Async Web server

  • Listens for connections
  • Wraps the new clients into Request
  • Keeps track of clients and cleans memory
  • Manages Rewrites and applies them to the request url
  • Manages Handlers and attaches them to Requests

Request Life Cycle

  • TCP connection is received by the server
  • The connection is wrapped inside Request object
  • When the request head is received (type, url, get params, http version and host), the server goes through all Rewrites (in the order they were added) to rewrite the url and inject query parameters. Next, it goes through all attached Handlers (in the order they were added) trying to find one that canHandle the given request. If none are found, the default (catch-all) handler is attached.
  • The rest of the request is received, calling the handleUpload or handleBody methods of the Handler if they are needed (POST+File/Body)
  • When the whole request is parsed, the result is given to the handleRequest method of the Handler and is ready to be responded to
  • In the handleRequest method, a Response object (see below) is attached to the Request; it will serve the response data back to the client
  • When the Response is sent, the client is closed and freed from the memory

Rewrites and how do they work

  • The Rewrites are used to rewrite the request url and/or inject get parameters for a specific request url path.
  • All Rewrites are evaluated on the request in the order they have been added to the server.
  • The Rewrite will change the request url only if the request url (excluding get parameters) fully matches the rewrite url, and when the optional Filter callback returns true.
  • Setting a Filter on the Rewrite lets you control when to apply the rewrite; the decision can be based on the request url, http version, request host/port/target host, get parameters or the request client's localIP or remoteIP.
  • Two filter callbacks are provided: ON_AP_FILTER to execute the rewrite when the request is made to the AP interface, ON_STA_FILTER to execute the rewrite when the request is made to the STA interface.
  • The Rewrite can specify a target url with optional get parameters, e.g. /to-url?with=params (see the short sketch below)
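
For instance (the urls and the parameter value are placeholders; fuller filter examples appear later in this document):

// inject a GET parameter into the rewritten url
server.rewrite("/radio", "/radio?f=101.3");

// apply a rewrite only to requests arriving on the AP interface
server.rewrite("/", "index.htm").setFilter(ON_AP_FILTER);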

Handlers and how do they work

  • The Handlers are used for executing specific actions for particular requests; a minimal custom Handler sketch follows this list.
  • One Handler instance can be attached to any request and lives together with the server
  • Setting a Filter on the Handler lets you control when to apply the handler; the decision can be based on the request url, http version, request host/port/target host, get parameters or the request client's localIP or remoteIP.
  • Two filter callbacks are provided: ON_AP_FILTER to consider the handler when the request is made to the AP interface, ON_STA_FILTER to consider the handler when the request is made to the STA interface.
  • The canHandle method is used for handler-specific control over whether the request can be handled and for declaring any interesting headers that the Request should parse. The decision can be based on request method, request url, http version, request host/port/target host and get parameters
  • Once a Handler is attached to a given Request (canHandle returned true), that Handler takes care of receiving any file/data upload and attaching a Response once the Request has been fully parsed
  • Handlers are evaluated in the order they are attached to the server. canHandle is called only if the Filter that was set on the Handler returns true.
  • The first Handler that can handle the request is selected; no further Filter or canHandle is called.
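
Below is a rough sketch of a custom Handler; the class name and route are illustrative, not part of the library:

class PingHandler : public AsyncWebHandler {
public:
  bool canHandle(AsyncWebServerRequest *request) override {
    // claim only GET /ping and let other handlers see everything else
    return request->method() == HTTP_GET && request->url() == "/ping";
  }
  void handleRequest(AsyncWebServerRequest *request) override {
    request->send(200, "text/plain", "pong");
  }
};

// attach it like any built-in handler
server.addHandler(new PingHandler());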

Responses and how do they work

  • The Response objects are used to send the response data back to the client
  • The Response object lives with the Request and is freed on end or disconnect
  • Different techniques are used depending on the response type to send the data in packets, returning almost immediately and sending the next packet once the previous one has been delivered. Any time in between is spent running the user loop and handling other network packets
  • Responding asynchronously is probably the most difficult thing for most to understand
  • Many different options exist for the user to make responding a background task

Template processing

  • ESPAsyncWebserver contains a simple template processing engine.
  • Template processing can be added to most response types.
  • Currently it supports only replacing template placeholders with actual values. No conditional processing, cycles, etc.
  • Placeholders are delimited with % symbols. Like this: %TEMPLATE_PLACEHOLDER%.
  • It works by extracting the placeholder name from the response text and passing it to a user-provided function which should return the actual value to be used instead of the placeholder.
  • Since it's a user-provided function, it is possible for library users to implement conditional processing and cycles themselves.
  • Since it's impossible to know the actual response size after template processing step in advance (and, therefore, to include it in response headers), the response becomes chunked.

Libraries and projects that use AsyncWebServer

  • WebSocketToSerial - Debug serial devices through the web browser
  • Sattrack - Track the ISS with ESP8266
  • ESP Radio - Icecast radio based on ESP8266 and VS1053
  • VZero - the Wireless zero-config controller for volkszaehler.org
  • ESPurna - ESPurna ("spark" in Catalan) is a custom C firmware for ESP8266 based smart switches. It was originally developed with the ITead Sonoff in mind.
  • fauxmoESP - Belkin WeMo emulator library for ESP8266.
  • ESP-RFID - MFRC522 RFID Access Control Management project for ESP8266.

Request Variables

Common Variables

request->version();       // uint8_t: 0 = HTTP/1.0, 1 = HTTP/1.1
request->method();        // enum:    HTTP_GET, HTTP_POST, HTTP_DELETE, HTTP_PUT, HTTP_PATCH, HTTP_HEAD, HTTP_OPTIONS
request->url();           // String:  URL of the request (not including host, port or GET parameters)
request->host();          // String:  The requested host (can be used for virtual hosting)
request->contentType();   // String:  ContentType of the request (not available in Handler::canHandle)
request->contentLength(); // size_t:  ContentLength of the request (not available in Handler::canHandle)
request->multipart();     // bool:    True if the request has content type "multipart"

Headers

//List all collected headers
int headers = request->headers();
int i;
for(i=0;i<headers;i++){
  AsyncWebHeader* h = request->getHeader(i);
  Serial.printf("HEADER[%s]: %s\n", h->name().c_str(), h->value().c_str());
}

//get specific header by name
if(request->hasHeader("MyHeader")){
  AsyncWebHeader* h = request->getHeader("MyHeader");
  Serial.printf("MyHeader: %s\n", h->value().c_str());
}

//List all collected headers (Compatibility)
int headers = request->headers();
int i;
for(i=0;i<headers;i++){
  Serial.printf("HEADER[%s]: %s\n", request->headerName(i).c_str(), request->header(i).c_str());
}

//get specific header by name (Compatibility)
if(request->hasHeader("MyHeader")){
  Serial.printf("MyHeader: %s\n", request->header("MyHeader").c_str());
}

GET, POST and FILE parameters

//List all parameters
int params = request->params();
for(int i=0;i<params;i++){
  AsyncWebParameter* p = request->getParam(i);
  if(p->isFile()){ //p->isPost() is also true
    Serial.printf("FILE[%s]: %s, size: %u\n", p->name().c_str(), p->value().c_str(), p->size());
  } else if(p->isPost()){
    Serial.printf("POST[%s]: %s\n", p->name().c_str(), p->value().c_str());
  } else {
    Serial.printf("GET[%s]: %s\n", p->name().c_str(), p->value().c_str());
  }
}

//Check if GET parameter exists
if(request->hasParam("download"))
  AsyncWebParameter* p = request->getParam("download");

//Check if POST (but not File) parameter exists
if(request->hasParam("download", true))
  AsyncWebParameter* p = request->getParam("download", true);

//Check if FILE was uploaded
if(request->hasParam("download", true, true))
  AsyncWebParameter* p = request->getParam("download", true, true);

//List all parameters (Compatibility)
int args = request->args();
for(int i=0;i<args;i++){
  Serial.printf("ARG[%s]: %s\n", request->argName(i).c_str(), request->arg(i).c_str());
}

//Check if parameter exists (Compatibility)
if(request->hasArg("download"))
  String arg = request->arg("download");

FILE Upload handling

void handleUpload(AsyncWebServerRequest *request, String filename, size_t index, uint8_t *data, size_t len, bool final){
  if(!index){
    Serial.printf("UploadStart: %s\n", filename.c_str());
  }
  for(size_t i=0; i<len; i++){
    Serial.write(data[i]);
  }
  if(final){
    Serial.printf("UploadEnd: %s, %u B\n", filename.c_str(), index+len);
  }
}

Body data handling

void handleBody(AsyncWebServerRequest *request, uint8_t *data, size_t len, size_t index, size_t total){
  if(!index){
    Serial.printf("BodyStart: %u B\n", total);
  }
  for(size_t i=0; i<len; i++){
    Serial.write(data[i]);
  }
  if(index + len == total){
    Serial.printf("BodyEnd: %u B\n", total);
  }
}

If needed, the _tempObject field on the request can be used to store a pointer to temporary data (e.g. from the body) associated with the request. If assigned, the pointer will automatically be freed along with the request.
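
A rough sketch of that pattern (the buffer handling is illustrative): allocate the buffer on the first chunk, fill it as data arrives, and let the request free it.

void handleBody(AsyncWebServerRequest *request, uint8_t *data, size_t len, size_t index, size_t total){
  if(index == 0){
    // allocated with malloc so it can be released together with the request
    request->_tempObject = malloc(total + 1);
  }
  if(request->_tempObject != NULL){
    memcpy((uint8_t*)request->_tempObject + index, data, len);
    if(index + len == total){
      ((char*)request->_tempObject)[total] = 0; // NUL-terminate for printing
      Serial.printf("Body: %s\n", (char*)request->_tempObject);
    }
  }
}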

JSON body handling with ArduinoJson

Endpoints which consume JSON can use a special handler to get ready to use JSON data in the request callback:

#include "AsyncJson.h"
#include "ArduinoJson.h"

AsyncCallbackJsonWebHandler* handler = new AsyncCallbackJsonWebHandler("/rest/endpoint", [](AsyncWebServerRequest *request, JsonVariant &json) {
  JsonObject& jsonObj = json.as<JsonObject>();
  // ...
});
server.addHandler(handler);

Responses

Redirect to another URL

//to local url
request->redirect("/login");

//to external url
request->redirect("http://esp8266.com");

Basic response with HTTP Code

request->send(404); //Sends 404 File Not Found

Basic response with HTTP Code and extra headers

AsyncWebServerResponse *response = request->beginResponse(404); //Sends 404 File Not Found
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Basic response with string content

request->send(200, "text/plain", "Hello World!");

Basic response with string content and extra headers

AsyncWebServerResponse *response = request->beginResponse(200, "text/plain", "Hello World!");
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Send large webpage from PROGMEM

const char index_html[] PROGMEM = "..."; // large char array, tested with 14k
request->send_P(200, "text/html", index_html);

Send large webpage from PROGMEM and extra headers

const char index_html[] PROGMEM = "..."; // large char array, tested with 14k
AsyncWebServerResponse *response = request->beginResponse_P(200, "text/html", index_html);
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Send large webpage from PROGMEM containing templates

String processor(const String& var)
{
  if(var == "HELLO_FROM_TEMPLATE")
    return F("Hello world!");
  return String();
}

// ...

const char index_html[] PROGMEM = "..."; // large char array, tested with 14k
request->send_P(200, "text/html", index_html, processor);

Send large webpage from PROGMEM containing templates and extra headers

String processor(const String& var)
{
  if(var == "HELLO_FROM_TEMPLATE")
    return F("Hello world!");
  return String();
}

// ...

const char index_html[] PROGMEM = "..."; // large char array, tested with 14k
AsyncWebServerResponse *response = request->beginResponse_P(200, "text/html", index_html, processor);
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Send binary content from PROGMEM


//File: favicon.ico.gz, Size: 726
#define favicon_ico_gz_len 726
const uint8_t favicon_ico_gz[] PROGMEM = {
 0x1F, 0x8B, 0x08, 0x08, 0x0B, 0x87, 0x90, 0x57, 0x00, 0x03, 0x66, 0x61, 0x76, 0x69, 0x63, 0x6F,
 0x6E, 0x2E, 0x69, 0x63, 0x6F, 0x00, 0xCD, 0x53, 0x5F, 0x48, 0x9A, 0x51, 0x14, 0xBF, 0x62, 0x6D,
 0x86, 0x96, 0xA9, 0x64, 0xD3, 0xFE, 0xA8, 0x99, 0x65, 0x1A, 0xB4, 0x8A, 0xA8, 0x51, 0x54, 0x23,
 0xA8, 0x11, 0x49, 0x51, 0x8A, 0x34, 0x62, 0x93, 0x85, 0x31, 0x58, 0x44, 0x12, 0x45, 0x2D, 0x58,
 0xF5, 0x52, 0x41, 0x10, 0x23, 0x82, 0xA0, 0x20, 0x98, 0x2F, 0xC1, 0x26, 0xED, 0xA1, 0x20, 0x89,
 0x04, 0xD7, 0x83, 0x58, 0x20, 0x28, 0x04, 0xAB, 0xD1, 0x9B, 0x8C, 0xE5, 0xC3, 0x60, 0x32, 0x64,
 0x0E, 0x56, 0xBF, 0x9D, 0xEF, 0xF6, 0x30, 0x82, 0xED, 0xAD, 0x87, 0xDD, 0x8F, 0xF3, 0xDD, 0x8F,
 0x73, 0xCF, 0xEF, 0x9C, 0xDF, 0x39, 0xBF, 0xFB, 0x31, 0x26, 0xA2, 0x27, 0x37, 0x97, 0xD1, 0x5B,
 0xCF, 0x9E, 0x67, 0x30, 0xA6, 0x66, 0x8C, 0x99, 0xC9, 0xC8, 0x45, 0x9E, 0x6B, 0x3F, 0x5F, 0x74,
 0xA6, 0x94, 0x5E, 0xDB, 0xFF, 0xB2, 0xE6, 0xE7, 0xE7, 0xF9, 0xDE, 0xD6, 0xD6, 0x96, 0xDB, 0xD8,
 0xD8, 0x78, 0xBF, 0xA1, 0xA1, 0xC1, 0xDA, 0xDC, 0xDC, 0x2C, 0xEB, 0xED, 0xED, 0x15, 0x9B, 0xCD,
 0xE6, 0x4A, 0x83, 0xC1, 0xE0, 0x2E, 0x29, 0x29, 0x99, 0xD6, 0x6A, 0xB5, 0x4F, 0x75, 0x3A, 0x9D,
 0x61, 0x75, 0x75, 0x95, 0xB5, 0xB7, 0xB7, 0xDF, 0xC8, 0xD1, 0xD4, 0xD4, 0xF4, 0xB0, 0xBA, 0xBA,
 0xFA, 0x83, 0xD5, 0x6A, 0xFD, 0x5A, 0x5E, 0x5E, 0x9E, 0x28, 0x2D, 0x2D, 0x0D, 0x10, 0xC6, 0x4B,
 0x98, 0x78, 0x5E, 0x5E, 0xDE, 0x95, 0x42, 0xA1, 0x40, 0x4E, 0x4E, 0xCE, 0x65, 0x76, 0x76, 0xF6,
 0x47, 0xB5, 0x5A, 0x6D, 0x4F, 0x26, 0x93, 0xA2, 0xD6, 0xD6, 0x56, 0x8E, 0x6D, 0x69, 0x69, 0xD1,
 0x11, 0x36, 0x62, 0xB1, 0x58, 0x60, 0x32, 0x99, 0xA0, 0xD7, 0xEB, 0x51, 0x58, 0x58, 0x88, 0xFC,
 0xFC, 0x7C, 0x10, 0x16, 0x02, 0x56, 0x2E, 0x97, 0x43, 0x2A, 0x95, 0x42, 0x2C, 0x16, 0x23, 0x33,
 0x33, 0x33, 0xAE, 0x52, 0xA9, 0x1E, 0x64, 0x65, 0x65, 0x71, 0x7C, 0x7D, 0x7D, 0xBD, 0x93, 0xEA,
 0xFE, 0x30, 0x1A, 0x8D, 0xE8, 0xEC, 0xEC, 0xC4, 0xE2, 0xE2, 0x22, 0x6A, 0x6A, 0x6A, 0x40, 0x39,
 0x41, 0xB5, 0x38, 0x4E, 0xC8, 0x33, 0x3C, 0x3C, 0x0C, 0x87, 0xC3, 0xC1, 0x6B, 0x54, 0x54, 0x54,
 0xBC, 0xE9, 0xEB, 0xEB, 0x93, 0x5F, 0x5C, 0x5C, 0x30, 0x8A, 0x9D, 0x2E, 0x2B, 0x2B, 0xBB, 0xA2,
 0x3E, 0x41, 0xBD, 0x21, 0x1E, 0x8F, 0x63, 0x6A, 0x6A, 0x0A, 0x81, 0x40, 0x00, 0x94, 0x1B, 0x3D,
 0x3D, 0x3D, 0x42, 0x3C, 0x96, 0x96, 0x96, 0x70, 0x7E, 0x7E, 0x8E, 0xE3, 0xE3, 0x63, 0xF8, 0xFD,
 0xFE, 0xB4, 0xD7, 0xEB, 0xF5, 0x8F, 0x8F, 0x8F, 0x5B, 0x68, 0x5E, 0x6F, 0x05, 0xCE, 0xB4, 0xE3,
 0xE8, 0xE8, 0x08, 0x27, 0x27, 0x27, 0xD8, 0xDF, 0xDF, 0xC7, 0xD9, 0xD9, 0x19, 0x6C, 0x36, 0x1B,
 0x36, 0x36, 0x36, 0x38, 0x9F, 0x85, 0x85, 0x05, 0xAC, 0xAF, 0xAF, 0x23, 0x1A, 0x8D, 0x22, 0x91,
 0x48, 0x20, 0x16, 0x8B, 0xFD, 0xDA, 0xDA, 0xDA, 0x7A, 0x41, 0x33, 0x7E, 0x57, 0x50, 0x50, 0x80,
 0x89, 0x89, 0x09, 0x84, 0xC3, 0x61, 0x6C, 0x6F, 0x6F, 0x23, 0x12, 0x89, 0xE0, 0xE0, 0xE0, 0x00,
 0x43, 0x43, 0x43, 0x58, 0x5E, 0x5E, 0xE6, 0x9C, 0x7D, 0x3E, 0x1F, 0x46, 0x47, 0x47, 0x79, 0xBE,
 0xBD, 0xBD, 0x3D, 0xE1, 0x3C, 0x1D, 0x0C, 0x06, 0x9F, 0x10, 0xB7, 0xC7, 0x84, 0x4F, 0xF6, 0xF7,
 0xF7, 0x63, 0x60, 0x60, 0x00, 0x83, 0x83, 0x83, 0x18, 0x19, 0x19, 0xC1, 0xDC, 0xDC, 0x1C, 0x8F,
 0x17, 0x7C, 0xA4, 0x27, 0xE7, 0x34, 0x39, 0x39, 0x89, 0x9D, 0x9D, 0x1D, 0x6E, 0x54, 0xE3, 0x13,
 0xE5, 0x34, 0x11, 0x37, 0x49, 0x51, 0x51, 0xD1, 0x4B, 0xA5, 0x52, 0xF9, 0x45, 0x26, 0x93, 0x5D,
 0x0A, 0xF3, 0x92, 0x48, 0x24, 0xA0, 0x6F, 0x14, 0x17, 0x17, 0xA3, 0xB6, 0xB6, 0x16, 0x5D, 0x5D,
 0x5D, 0x7C, 0x1E, 0xBB, 0xBB, 0xBB, 0x9C, 0xD7, 0xE1, 0xE1, 0x21, 0x42, 0xA1, 0xD0, 0x6B, 0xD2,
 0x45, 0x4C, 0x33, 0x12, 0x34, 0xCC, 0xA0, 0x19, 0x54, 0x92, 0x56, 0x0E, 0xD2, 0xD9, 0x43, 0xF8,
 0xCF, 0x82, 0x56, 0xC2, 0xDC, 0xEB, 0xEA, 0xEA, 0x38, 0x7E, 0x6C, 0x6C, 0x4C, 0xE0, 0xFE, 0x9D,
 0xB8, 0xBF, 0xA7, 0xFA, 0xAF, 0x56, 0x56, 0x56, 0xEE, 0x6D, 0x6E, 0x6E, 0xDE, 0xB8, 0x47, 0x55,
 0x55, 0x55, 0x6C, 0x66, 0x66, 0x46, 0x44, 0xDA, 0x3B, 0x34, 0x1A, 0x4D, 0x94, 0xB0, 0x3F, 0x09,
 0x7B, 0x45, 0xBD, 0xA5, 0x5D, 0x2E, 0x57, 0x8C, 0x7A, 0x73, 0xD9, 0xED, 0xF6, 0x3B, 0x84, 0xFF,
 0xE7, 0x7D, 0xA6, 0x3A, 0x2C, 0x95, 0x4A, 0xB1, 0x8E, 0x8E, 0x0E, 0x6D, 0x77, 0x77, 0xB7, 0xCD,
 0xE9, 0x74, 0x3E, 0x73, 0xBB, 0xDD, 0x8F, 0x3C, 0x1E, 0x8F, 0xE6, 0xF4, 0xF4, 0x94, 0xAD, 0xAD,
 0xAD, 0xDD, 0xDE, 0xCF, 0x73, 0x0B, 0x0B, 0xB8, 0xB6, 0xE0, 0x5D, 0xC6, 0x66, 0xC5, 0xE4, 0x10,
 0x4C, 0xF4, 0xF7, 0xD8, 0x59, 0xF2, 0x7F, 0xA3, 0xB8, 0xB4, 0xFC, 0x0F, 0xEE, 0x37, 0x70, 0xEC,
 0x16, 0x4A, 0x7E, 0x04, 0x00, 0x00
};

AsyncWebServerResponse *response = request->beginResponse_P(200, "image/x-icon", favicon_ico_gz, favicon_ico_gz_len);
response->addHeader("Content-Encoding", "gzip");
request->send(response);

Respond with content coming from a Stream

//read 12 bytes from Serial and send them as Content Type text/plain
request->send(Serial, "text/plain", 12);

Respond with content coming from a Stream and extra headers

//read 12 bytes from Serial and send them as Content Type text/plain
AsyncWebServerResponse *response = request->beginResponse(Serial, "text/plain", 12);
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Respond with content coming from a Stream containing templates

String processor(const String& var)
{
  if(var == "HELLO_FROM_TEMPLATE")
    return F("Hello world!");
  return String();
}

// ...

//read 12 bytes from Serial and send them as Content Type text/plain
request->send(Serial, "text/plain", 12, processor);

Respond with content coming from a Stream containing templates and extra headers

String processor(const String& var)
{
  if(var == "HELLO_FROM_TEMPLATE")
    return F("Hello world!");
  return String();
}

// ...

//read 12 bytes from Serial and send them as Content Type text/plain
AsyncWebServerResponse *response = request->beginResponse(Serial, "text/plain", 12, processor);
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Respond with content coming from a File

//Send index.htm with default content type
request->send(SPIFFS, "/index.htm");

//Send index.htm as text
request->send(SPIFFS, "/index.htm", "text/plain");

//Download index.htm
request->send(SPIFFS, "/index.htm", String(), true);

Respond with content coming from a File and extra headers

//Send index.htm with default content type
AsyncWebServerResponse *response = request->beginResponse(SPIFFS, "/index.htm");

//Send index.htm as text
AsyncWebServerResponse *response = request->beginResponse(SPIFFS, "/index.htm", "text/plain");

//Download index.htm
AsyncWebServerResponse *response = request->beginResponse(SPIFFS, "/index.htm", String(), true);

response->addHeader("Server","ESP Async Web Server");
request->send(response);

Respond with content coming from a File containing templates

Internally uses Chunked Response.

Index.htm contents:

%HELLO_FROM_TEMPLATE%

Somewhere in source files:

String processor(const String& var)
{
  if(var == "HELLO_FROM_TEMPLATE")
    return F("Hello world!");
  return String();
}

// ...

//Send index.htm with template processor function
request->send(SPIFFS, "/index.htm", String(), false, processor);

Respond with content using a callback

//send 128 bytes as plain text
request->send("text/plain", 128, [](uint8_t *buffer, size_t maxLen, size_t index) -> size_t {
  //Write up to "maxLen" bytes into "buffer" and return the amount written.
  //index equals the amount of bytes that have been already sent
  //You will not be asked for more bytes once the content length has been reached.
  //Keep in mind that you can not delay or yield waiting for more data!
  //Send what you currently have and you will be asked for more again
  return mySource.read(buffer, maxLen);
});

Respond with content using a callback and extra headers

//send 128 bytes as plain text
AsyncWebServerResponse *response = request->beginResponse("text/plain", 128, [](uint8_t *buffer, size_t maxLen, size_t index) -> size_t {
  //Write up to "maxLen" bytes into "buffer" and return the amount written.
  //index equals the amount of bytes that have been already sent
  //You will not be asked for more bytes once the content length has been reached.
  //Keep in mind that you can not delay or yield waiting for more data!
  //Send what you currently have and you will be asked for more again
  return mySource.read(buffer, maxLen);
});
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Respond with content using a callback containing templates

String processor(const String& var)
{
  if(var == "HELLO_FROM_TEMPLATE")
    return F("Hello world!");
  return String();
}

// ...

//send 128 bytes as plain text
request->send("text/plain", 128, [](uint8_t *buffer, size_t maxLen, size_t index) -> size_t {
  //Write up to "maxLen" bytes into "buffer" and return the amount written.
  //index equals the amount of bytes that have been already sent
  //You will not be asked for more bytes once the content length has been reached.
  //Keep in mind that you can not delay or yield waiting for more data!
  //Send what you currently have and you will be asked for more again
  return mySource.read(buffer, maxLen);
}, processor);

Respond with content using a callback containing templates and extra headers

String processor(const String& var)
{
  if(var == "HELLO_FROM_TEMPLATE")
    return F("Hello world!");
  return String();
}

// ...

//send 128 bytes as plain text
AsyncWebServerResponse *response = request->beginResponse("text/plain", 128, [](uint8_t *buffer, size_t maxLen, size_t index) -> size_t {
  //Write up to "maxLen" bytes into "buffer" and return the amount written.
  //index equals the amount of bytes that have been already sent
  //You will not be asked for more bytes once the content length has been reached.
  //Keep in mind that you can not delay or yield waiting for more data!
  //Send what you currently have and you will be asked for more again
  return mySource.read(buffer, maxLen);
}, processor);
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Chunked Response

Used when content length is unknown. Works best if the client supports HTTP/1.1

AsyncWebServerResponse *response = request->beginChunkedResponse("text/plain", [](uint8_t *buffer, size_t maxLen, size_t index) -> size_t {
  //Write up to "maxLen" bytes into "buffer" and return the amount written.
  //index equals the amount of bytes that have been already sent
  //You will be asked for more data until 0 is returned
  //Keep in mind that you can not delay or yield waiting for more data!
  return mySource.read(buffer, maxLen);
});
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Chunked Response containing templates

Used when content length is unknown. Works best if the client supports HTTP/1.1

String processor(const String& var)
{
  if(var == "HELLO_FROM_TEMPLATE")
    return F("Hello world!");
  return String();
}

// ...

AsyncWebServerResponse *response = request->beginChunkedResponse("text/plain", [](uint8_t *buffer, size_t maxLen, size_t index) -> size_t {
  //Write up to "maxLen" bytes into "buffer" and return the amount written.
  //index equals the amount of bytes that have been already sent
  //You will be asked for more data until 0 is returned
  //Keep in mind that you can not delay or yield waiting for more data!
  return mySource.read(buffer, maxLen);
}, processor);
response->addHeader("Server","ESP Async Web Server");
request->send(response);

Print to response

AsyncResponseStream *response = request->beginResponseStream("text/html");
response->addHeader("Server","ESP Async Web Server");
response->printf("<!DOCTYPE html><html><head><title>Webpage at %s</title></head><body>", request->url().c_str());

response->print("<h2>Hello ");
response->print(request->client()->remoteIP());
response->print("</h2>");

response->print("<h3>General</h3>");
response->print("<ul>");
response->printf("<li>Version: HTTP/1.%u</li>", request->version());
response->printf("<li>Method: %s</li>", request->methodToString());
response->printf("<li>URL: %s</li>", request->url().c_str());
response->printf("<li>Host: %s</li>", request->host().c_str());
response->printf("<li>ContentType: %s</li>", request->contentType().c_str());
response->printf("<li>ContentLength: %u</li>", request->contentLength());
response->printf("<li>Multipart: %s</li>", request->multipart()?"true":"false");
response->print("</ul>");

response->print("<h3>Headers</h3>");
response->print("<ul>");
int headers = request->headers();
for(int i=0;i<headers;i++){
  AsyncWebHeader* h = request->getHeader(i);
  response->printf("<li>%s: %s</li>", h->name().c_str(), h->value().c_str());
}
response->print("</ul>");

response->print("<h3>Parameters</h3>");
response->print("<ul>");
int params = request->params();
for(int i=0;i<params;i++){
  AsyncWebParameter* p = request->getParam(i);
  if(p->isFile()){
    response->printf("<li>FILE[%s]: %s, size: %u</li>", p->name().c_str(), p->value().c_str(), p->size());
  } else if(p->isPost()){
    response->printf("<li>POST[%s]: %s</li>", p->name().c_str(), p->value().c_str());
  } else {
    response->printf("<li>GET[%s]: %s</li>", p->name().c_str(), p->value().c_str());
  }
}
response->print("</ul>");

response->print("</body></html>");
//send the response last
request->send(response);

ArduinoJson Basic Response

This way of sending JSON is great when the result is below 4KB

#include "AsyncJson.h"
#include "ArduinoJson.h"


AsyncResponseStream *response = request->beginResponseStream("application/json");
DynamicJsonBuffer jsonBuffer;
JsonObject &root = jsonBuffer.createObject();
root["heap"] = ESP.getFreeHeap();
root["ssid"] = WiFi.SSID();
root.printTo(*response);
request->send(response);

ArduinoJson Advanced Response

This response can handle really large JSON objects (tested up to 40KB). There isn't any noticeable speed decrease for small results compared with the method above. Since ArduinoJson does not allow reading parts of the string, the whole JSON document has to be passed every time a chunk needs to be sent, which causes a speed decrease proportional to the size of the resulting JSON packets.

#include "AsyncJson.h"
#include "ArduinoJson.h"


AsyncJsonResponse * response = new AsyncJsonResponse();
response->addHeader("Server","ESP Async Web Server");
JsonObject& root = response->getRoot();
root["heap"] = ESP.getFreeHeap();
root["ssid"] = WiFi.SSID();
response->setLength();
request->send(response);

Serving static files

In addition to serving files from SPIFFS as described above, the server provides a dedicated handler that optimizes the performance of serving files from SPIFFS - AsyncStaticWebHandler. Use the server.serveStatic() function to initialize and add a new instance of AsyncStaticWebHandler to the server. The Handler will not handle the request if the file does not exist, e.g. the server will continue to look for another handler that can handle the request. Notice that you can chain setter functions to set up the handler, or keep a pointer to change it at a later time.

Serving specific file by name

// Serve the file "/www/page.htm" when request url is "/page.htm"
server.serveStatic("/page.htm", SPIFFS, "/www/page.htm");

Serving files in directory

To serve files in a directory, the path to the files should specify a directory in SPIFFS and end with "/".

// Serve files in directory "/www/" when request url starts with "/"
// Requests to the root or to non-existing files will try to serve the default
// file name "index.htm", if it exists
server.serveStatic("/", SPIFFS, "/www/");

// Server with different default file
server.serveStatic("/", SPIFFS, "/www/").setDefaultFile("default.html");

Serving static files with authentication

server
    .serveStatic("/", SPIFFS, "/www/")
    .setDefaultFile("default.html")
    .setAuthentication("user", "pass");

Specifying Cache-Control header

It is possible to specify a Cache-Control header value to reduce the number of calls to the server once the client has loaded the files. For more information on Cache-Control values see Cache-Control

// Cache responses for 10 minutes (600 seconds)
server.serveStatic("/", SPIFFS, "/www/").setCacheControl("max-age=600");

//*** Change Cache-Control after server setup ***

// During setup - keep a pointer to the handler
AsyncStaticWebHandler* handler = &server.serveStatic("/", SPIFFS, "/www/").setCacheControl("max-age=600");

// At a later event - change Cache-Control
handler->setCacheControl("max-age=30");

Specifying Date-Modified header

It is possible to specify a Date-Modified value (sent as the Last-Modified header) to enable the server to return a Not-Modified (304) response for requests with an "If-Modified-Since" header carrying the same value, instead of responding with the actual file content.

// Update the date modified string every time files are updated
server.serveStatic("/", SPIFFS, "/www/").setLastModified("Mon, 20 Jun 2016 14:00:00 GMT");

//*** Change last modified value at a later stage ***

// During setup - read last modified value from config or EEPROM
String date_modified = loadDateModified();
AsyncStaticWebHandler* handler = &server.serveStatic("/", SPIFFS, "/www/");
handler->setLastModified(date_modified);

// At a later event when files are updated
String date_modified = getNewDateModified();
saveDateModified(date_modified); // Save for next reset
handler->setLastModified(date_modified);

Specifying Template Processor callback

It is possible to specify a template processor for static files. For information on the template processor see Respond with content coming from a File containing templates.

String processor(const String& var)
{
  if(var == "HELLO_FROM_TEMPLATE")
    return F("Hello world!");
  return String();
}

// ...

server.serveStatic("/", SPIFFS, "/www/").setTemplateProcessor(processor);

Param Rewrite With Matching

It is possible to rewrite the request url with parameter matching. Here is an example with one parameter: rewrite "/radio/{frequence}" -> "/radio?f={frequence}"

class OneParamRewrite : public AsyncWebRewrite
{
  protected:
    String _urlPrefix;
    int _paramIndex;
    String _paramsBackup;

  public:
  OneParamRewrite(const char* from, const char* to)
    : AsyncWebRewrite(from, to) {

      _paramIndex = _from.indexOf('{');

      if( _paramIndex >=0 && _from.endsWith("}")) {
        _urlPrefix = _from.substring(0, _paramIndex);
        int index = _params.indexOf('{');
        if(index >= 0) {
          _params = _params.substring(0, index);
        }
      } else {
        _urlPrefix = _from;
      }
      _paramsBackup = _params;
  }

  bool match(AsyncWebServerRequest *request) override {
    if(request->url().startsWith(_urlPrefix)) {
      if(_paramIndex >= 0) {
        _params = _paramsBackup + request->url().substring(_paramIndex);
      } else {
        _params = _paramsBackup;
      }
    return true;

    } else {
      return false;
    }
  }
};

Usage:

  server.addRewrite( new OneParamRewrite("/radio/{frequence}", "/radio?f={frequence}") );

Using filters

Filters can be set on a Rewrite or Handler in order to control when to apply the rewrite and consider the handler. A filter is a callback function that evaluates the request and returns a boolean: true to include the item or false to exclude it. Two filter callbacks are provided for convenience:

  • ON_STA_FILTER - return true when requests are made to the STA (station mode) interface.
  • ON_AP_FILTER - return true when requests are made to the AP (access point) interface.

Serve different site files in AP mode

server.serveStatic("/", SPIFFS, "/www/").setFilter(ON_STA_FILTER);
server.serveStatic("/", SPIFFS, "/ap/").setFilter(ON_AP_FILTER);

Rewrite to different index on AP

// Serve the file "/www/index-ap.htm" in AP, and the file "/www/index.htm" on STA
server.rewrite("/", "index.htm");
server.rewrite("/index.htm", "index-ap.htm").setFilter(ON_AP_FILTER);
server.serveStatic("/", SPIFFS, "/www/");

Serving different hosts

// Filter callback using request host
bool filterOnHost1(AsyncWebServerRequest *request) { return request->host() == "host1"; }

// Server setup: server files in "/host1/" to requests for "host1", and files in "/www/" otherwise.
server.serveStatic("/", SPIFFS, "/host1/").setFilter(filterOnHost1);
server.serveStatic("/", SPIFFS, "/www/");

Determine interface inside callbacks

  String RedirectUrl = "http://";
  if (ON_STA_FILTER(request)) {
    RedirectUrl += WiFi.localIP().toString();
  } else {
    RedirectUrl += WiFi.softAPIP().toString();
  }
  RedirectUrl += "/index.htm";
  request->redirect(RedirectUrl);

Bad Responses

Some responses are implemented, but you should not use them, because they do not conform to HTTP. The following example will lead to an unclean close of the connection and waste more time than providing the content length would

Respond with content using a callback without content length to HTTP/1.0 clients

//This is used as fallback for chunked responses to HTTP/1.0 Clients
request->send("text/plain", 0, [](uint8_t *buffer, size_t maxLen, size_t index) -> size_t {
  //Write up to "maxLen" bytes into "buffer" and return the amount written.
  //You will be asked for more data until 0 is returned
  //Keep in mind that you can not delay or yield waiting for more data!
  return mySource.read(buffer, maxLen);
});

Async WebSocket Plugin

The server includes a web socket plugin which lets you define different WebSocket locations to connect to without starting another listening service or using a different port

Async WebSocket Event


void onEvent(AsyncWebSocket * server, AsyncWebSocketClient * client, AwsEventType type, void * arg, uint8_t *data, size_t len){
  if(type == WS_EVT_CONNECT){
    //client connected
    os_printf("ws[%s][%u] connect\n", server->url(), client->id());
    client->printf("Hello Client %u :)", client->id());
    client->ping();
  } else if(type == WS_EVT_DISCONNECT){
    //client disconnected
    os_printf("ws[%s][%u] disconnect: %u\n", server->url(), client->id());
  } else if(type == WS_EVT_ERROR){
    //error was received from the other end
    os_printf("ws[%s][%u] error(%u): %s\n", server->url(), client->id(), *((uint16_t*)arg), (char*)data);
  } else if(type == WS_EVT_PONG){
    //pong message was received (in response to a ping request maybe)
    os_printf("ws[%s][%u] pong[%u]: %s\n", server->url(), client->id(), len, (len)?(char*)data:"");
  } else if(type == WS_EVT_DATA){
    //data packet
    AwsFrameInfo * info = (AwsFrameInfo*)arg;
    if(info->final && info->index == 0 && info->len == len){
      //the whole message is in a single frame and we got all of it's data
      os_printf("ws[%s][%u] %s-message[%llu]: ", server->url(), client->id(), (info->opcode == WS_TEXT)?"text":"binary", info->len);
      if(info->opcode == WS_TEXT){
        data[len] = 0;
        os_printf("%s\n", (char*)data);
      } else {
        for(size_t i=0; i < info->len; i++){
          os_printf("%02x ", data[i]);
        }
        os_printf("\n");
      }
      if(info->opcode == WS_TEXT)
        client->text("I got your text message");
      else
        client->binary("I got your binary message");
    } else {
      //message is comprised of multiple frames or the frame is split into multiple packets
      if(info->index == 0){
        if(info->num == 0)
          os_printf("ws[%s][%u] %s-message start\n", server->url(), client->id(), (info->message_opcode == WS_TEXT)?"text":"binary");
        os_printf("ws[%s][%u] frame[%u] start[%llu]\n", server->url(), client->id(), info->num, info->len);
      }

      os_printf("ws[%s][%u] frame[%u] %s[%llu - %llu]: ", server->url(), client->id(), info->num, (info->message_opcode == WS_TEXT)?"text":"binary", info->index, info->index + len);
      if(info->message_opcode == WS_TEXT){
        data[len] = 0;
        os_printf("%s\n", (char*)data);
      } else {
        for(size_t i=0; i < len; i++){
          os_printf("%02x ", data[i]);
        }
        os_printf("\n");
      }

      if((info->index + len) == info->len){
        os_printf("ws[%s][%u] frame[%u] end[%llu]\n", server->url(), client->id(), info->num, info->len);
        if(info->final){
          os_printf("ws[%s][%u] %s-message end\n", server->url(), client->id(), (info->message_opcode == WS_TEXT)?"text":"binary");
          if(info->message_opcode == WS_TEXT)
            client->text("I got your text message");
          else
            client->binary("I got your binary message");
        }
      }
    }
  }
}

Methods for sending data to a socket client




//Server methods
AsyncWebSocket ws("/ws");
//printf to a client
ws.printf((uint32_t)client_id, arguments...);
//printf to all clients
ws.printfAll(arguments...);
//printf_P to a client
ws.printf_P((uint32_t)client_id, PSTR(format), arguments...);
//printfAll_P to all clients
ws.printfAll_P(PSTR(format), arguments...);
//send text to a client
ws.text((uint32_t)client_id, (char*)text);
ws.text((uint32_t)client_id, (uint8_t*)text, (size_t)len);
//send text from PROGMEM to a client
ws.text((uint32_t)client_id, PSTR("text"));
const char flash_text[] PROGMEM = "Text to send";
ws.text((uint32_t)client_id, FPSTR(flash_text));
//send text to all clients
ws.textAll((char*)text);
ws.textAll((uint8_t*)text, (size_t)len);
//send binary to a client
ws.binary((uint32_t)client_id, (char*)binary);
ws.binary((uint32_t)client_id, (uint8_t*)binary, (size_t)len);
//send binary from PROGMEM to a client
const uint8_t flash_binary[] PROGMEM = { 0x01, 0x02, 0x03, 0x04 };
ws.binary((uint32_t)client_id, flash_binary, 4);
//send binary to all clients
ws.binaryAll((char*)binary);
ws.binaryAll((uint8_t*)binary, (size_t)len);
//HTTP Authenticate before switch to Websocket protocol
ws.setAuthentication("user", "pass");

//client methods
AsyncWebSocketClient * client;
//printf
client->printf(arguments...);
//printf_P
client->printf_P(PSTR(format), arguments...);
//send text
client->text((char*)text);
client->text((uint8_t*)text, (size_t)len);
//send text from PROGMEM
client->text(PSTR("text"));
const char flash_text[] PROGMEM = "Text to send";
client->text(FPSTR(flash_text));
//send binary
client->binary((char*)binary);
client->binary((uint8_t*)binary, (size_t)len);
//send binary from PROGMEM
const uint8_t flash_binary[] PROGMEM = { 0x01, 0x02, 0x03, 0x04 };
client->binary(flash_binary, 4);

Direct access to web socket message buffer

When sending a web socket message using the above methods, a buffer is created. Under certain circumstances you might want to manipulate or populate this buffer directly from your application, for example to prevent unnecessary duplication of the data. The example below shows how to create a buffer, print data to it from an ArduinoJson object and then send it.

void sendDataWs(AsyncWebSocketClient * client)
{
    DynamicJsonBuffer jsonBuffer;
    JsonObject& root = jsonBuffer.createObject();
    root["a"] = "abc";
    root["b"] = "abcd";
    root["c"] = "abcde";
    root["d"] = "abcdef";
    root["e"] = "abcdefg";
    size_t len = root.measureLength();
    AsyncWebSocketMessageBuffer * buffer = ws.makeBuffer(len); //  creates a buffer (len + 1) for you.
    if (buffer) {
        root.printTo((char *)buffer->get(), len + 1);
        if (client) {
            client->text(buffer);
        } else {
            ws.textAll(buffer);
        }
    }
}

Limiting the number of web socket clients

Browsers sometimes do not correctly close the websocket connection, even when the close() function is called in javascript. This will eventually exhaust the web server's resources and will cause the server to crash. Periodically calling the cleanupClients() function from the main loop() function limits the number of clients by closing the oldest client when the maximum number of clients has been exceeded. This can be called every cycle; however, if you wish to use less power, calling it as infrequently as once per second is sufficient.

void loop(){
  ws.cleanupClients();
}

Async Event Source Plugin

The server includes an EventSource (Server-Sent Events) plugin which can be used to send short text events to the browser. The difference between EventSource and WebSockets is that EventSource is a single-direction, text-only protocol.

Setup Event Source on the server

AsyncWebServer server(80);
AsyncEventSource events("/events");

void setup(){
  // setup ......
  events.onConnect([](AsyncEventSourceClient *client){
    if(client->lastId()){
      Serial.printf("Client reconnected! Last message ID that it gat is: %u\n", client->lastId());
    }
    //send event with message "hello!", id current millis
    // and set reconnect delay to 1 second
    client->send("hello!",NULL,millis(),1000);
  });
  //HTTP Basic authentication
  events.setAuthentication("user", "pass");
  server.addHandler(&events);
  // setup ......
}

void loop(){
  if(eventTriggered){ // your logic here
    //send event "myevent"
    events.send("my event content","myevent",millis());
  }
}

Setup Event Source in the browser

if (!!window.EventSource) {
  var source = new EventSource('/events');

  source.addEventListener('open', function(e) {
    console.log("Events Connected");
  }, false);

  source.addEventListener('error', function(e) {
    if (e.target.readyState != EventSource.OPEN) {
      console.log("Events Disconnected");
    }
  }, false);

  source.addEventListener('message', function(e) {
    console.log("message", e.data);
  }, false);

  source.addEventListener('myevent', function(e) {
    console.log("myevent", e.data);
  }, false);
}

Scanning for available WiFi Networks

//First request will return 0 results unless you start scan from somewhere else (loop/setup)
//Do not request more often than 3-5 seconds
server.on("/scan", HTTP_GET, [](AsyncWebServerRequest *request){
  String json = "[";
  int n = WiFi.scanComplete();
  if(n == -2){
    WiFi.scanNetworks(true);
  } else if(n){
    for (int i = 0; i < n; ++i){
      if(i) json += ",";
      json += "{";
      json += "\"rssi\":"+String(WiFi.RSSI(i));
      json += ",\"ssid\":\""+WiFi.SSID(i)+"\"";
      json += ",\"bssid\":\""+WiFi.BSSIDstr(i)+"\"";
      json += ",\"channel\":"+String(WiFi.channel(i));
      json += ",\"secure\":"+String(WiFi.encryptionType(i));
      json += ",\"hidden\":"+String(WiFi.isHidden(i)?"true":"false");
      json += "}";
    }
    WiFi.scanDelete();
    if(WiFi.scanComplete() == -2){
      WiFi.scanNetworks(true);
    }
  }
  json += "]";
  request->send(200, "application/json", json);
  json = String();
});

Remove handlers and rewrites

The server goes through handlers in the same order they were added. You can't simply add a handler with the same path to override an existing one. To remove a handler:

// keep a reference to the handler registered for a particular URL path
auto& handler = server.on("/some/path", [](AsyncWebServerRequest *request){
  //do something useful
});
// when you don't need handler anymore remove it
server.removeHandler(&handler);

// same with rewrites
server.removeRewrite(&someRewrite);

server.onNotFound([](AsyncWebServerRequest *request){
  request->send(404);
});

// remove server.onNotFound handler
server.onNotFound(NULL);

// remove all rewrites, handlers and onNotFound/onFileUpload/onRequestBody callbacks
server.reset();

Setting up the server

#include "ESPAsyncTCP.h"
#include "ESPAsyncWebServer.h"

AsyncWebServer server(80);
AsyncWebSocket ws("/ws"); // access at ws://[esp ip]/ws
AsyncEventSource events("/events"); // event source (Server-Sent events)

const char* ssid = "your-ssid";
const char* password = "your-pass";
const char* http_username = "admin";
const char* http_password = "admin";

//flag to use from web update to reboot the ESP
bool shouldReboot = false;

void onRequest(AsyncWebServerRequest *request){
  //Handle Unknown Request
  request->send(404);
}

void onBody(AsyncWebServerRequest *request, uint8_t *data, size_t len, size_t index, size_t total){
  //Handle body
}

void onUpload(AsyncWebServerRequest *request, String filename, size_t index, uint8_t *data, size_t len, bool final){
  //Handle upload
}

void onEvent(AsyncWebSocket * server, AsyncWebSocketClient * client, AwsEventType type, void * arg, uint8_t *data, size_t len){
  //Handle WebSocket event
}

void setup(){
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  if (WiFi.waitForConnectResult() != WL_CONNECTED) {
    Serial.printf("WiFi Failed!\n");
    return;
  }

  // attach AsyncWebSocket
  ws.onEvent(onEvent);
  server.addHandler(&ws);

  // attach AsyncEventSource
  server.addHandler(&events);

  // respond to GET requests on URL /heap
  server.on("/heap", HTTP_GET, [](AsyncWebServerRequest *request){
    request->send(200, "text/plain", String(ESP.getFreeHeap()));
  });

  // upload a file to /upload
  server.on("/upload", HTTP_POST, [](AsyncWebServerRequest *request){
    request->send(200);
  }, onUpload);

  // send a file when /index is requested
  server.on("/index", HTTP_ANY, [](AsyncWebServerRequest *request){
    request->send(SPIFFS, "/index.htm");
  });

  // HTTP basic authentication
  server.on("/login", HTTP_GET, [](AsyncWebServerRequest *request){
    if(!request->authenticate(http_username, http_password))
        return request->requestAuthentication();
    request->send(200, "text/plain", "Login Success!");
  });

  // Simple Firmware Update Form
  server.on("/update", HTTP_GET, [](AsyncWebServerRequest *request){
    request->send(200, "text/html", "<form method='POST' action='/update' enctype='multipart/form-data'><input type='file' name='update'><input type='submit' value='Update'></form>");
  });
  server.on("/update", HTTP_POST, [](AsyncWebServerRequest *request){
    shouldReboot = !Update.hasError();
    AsyncWebServerResponse *response = request->beginResponse(200, "text/plain", shouldReboot?"OK":"FAIL");
    response->addHeader("Connection", "close");
    request->send(response);
  },[](AsyncWebServerRequest *request, String filename, size_t index, uint8_t *data, size_t len, bool final){
    if(!index){
      Serial.printf("Update Start: %s\n", filename.c_str());
      Update.runAsync(true);
      if(!Update.begin((ESP.getFreeSketchSpace() - 0x1000) & 0xFFFFF000)){
        Update.printError(Serial);
      }
    }
    if(!Update.hasError()){
      if(Update.write(data, len) != len){
        Update.printError(Serial);
      }
    }
    if(final){
      if(Update.end(true)){
        Serial.printf("Update Success: %uB\n", index+len);
      } else {
        Update.printError(Serial);
      }
    }
  });

  // attach filesystem root at URL /fs
  server.serveStatic("/fs", SPIFFS, "/");

  // Catch-All Handlers
  // Any request that can not find a Handler that canHandle it
  // ends in the callbacks below.
  server.onNotFound(onRequest);
  server.onFileUpload(onUpload);
  server.onRequestBody(onBody);

  server.begin();
}

void loop(){
  if(shouldReboot){
    Serial.println("Rebooting...");
    delay(100);
    ESP.restart();
  }
  static char temp[128];
  sprintf(temp, "Seconds since boot: %u", millis()/1000);
  events.send(temp, "time"); //send event "time"
}

Setup global and class functions as request handlers

#include <Arduino.h>
#include <ESPAsyncWebServer.h>
#include <Hash.h>
#include <functional>

void handleRequest(AsyncWebServerRequest *request){}

class WebClass {
public :
  AsyncWebServer classWebServer = AsyncWebServer(81);

  WebClass(){};

  void classRequest (AsyncWebServerRequest *request){}

  void begin(){
    // attach global request handler
    classWebServer.on("/example", HTTP_ANY, handleRequest);

    // attach class request handler
    classWebServer.on("/example", HTTP_ANY, std::bind(&WebClass::classRequest, this, std::placeholders::_1));
  }
};

AsyncWebServer globalWebServer(80);
WebClass webClassInstance;

void setup() {
  // attach global request handler
  globalWebServer.on("/example", HTTP_ANY, handleRequest);

  // attach class request handler
  globalWebServer.on("/example", HTTP_ANY, std::bind(&WebClass::classRequest, webClassInstance, std::placeholders::_1));
}

void loop() {

}

Methods for controlling websocket connections

  // Disable client connections if it was activated
  if ( ws.enabled() )
    ws.enable(false);

  // enable client connections if it was disabled
  if ( !ws.enabled() )
    ws.enable(true);
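
These calls assume an AsyncWebSocket handler named ws has already been attached to the server; a minimal sketch of that setup (the onWsEvent name is illustrative) might look like this:

#include <ESPAsyncWebServer.h>

AsyncWebServer server(80);
AsyncWebSocket ws("/ws"); // websocket endpoint served at /ws

void onWsEvent(AsyncWebSocket *socket, AsyncWebSocketClient *client,
               AwsEventType type, void *arg, uint8_t *data, size_t len){
  if(type == WS_EVT_CONNECT){
    client->text("connected"); // greet the new client
  }
}

void setup(){
  // WiFi setup omitted for brevity
  ws.onEvent(onWsEvent);
  server.addHandler(&ws);
  server.begin();
}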

Example of OTA code

  // OTA callbacks
  ArduinoOTA.onStart([]() {
    // Clean SPIFFS
    SPIFFS.end();

    // Disable client connections    
    ws.enable(false);

    // Advertise connected clients what's going on
    ws.textAll("OTA Update Started");

    // Close them
    ws.closeAll();

  });
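
The snippet above only shows the onStart callback. A minimal surrounding setup (a sketch, assuming the standard ArduinoOTA library and the ws handler from the previous examples) also has to register the service and poll it from loop():

#include <ArduinoOTA.h>

void setup(){
  // ... WiFi/server setup and the ArduinoOTA.onStart callback shown above ...
  ArduinoOTA.begin();   // start listening for OTA upload requests
}

void loop(){
  ArduinoOTA.handle();  // must be called regularly or OTA uploads will never start
}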

Adding Default Headers

In some cases, such as when working with CORS or with some sort of custom authentication system, you might need to define a header that should be added to all responses (including static, websocket, and EventSource responses). The DefaultHeaders singleton allows you to do this.

Example:

DefaultHeaders::Instance().addHeader("Access-Control-Allow-Origin", "*");
webServer.begin();

NOTE: In most cases you will still need to respond to the OPTIONS method for the CORS pre-flight request (unless you are only using GET).

This is one option:

webServer.onNotFound([](AsyncWebServerRequest *request) {
  if (request->method() == HTTP_OPTIONS) {
    request->send(200);
  } else {
    request->send(404);
  }
});
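
For pre-flight requests the browser typically also checks the allowed methods and headers. One way to cover that (the header values below are just examples, not requirements of the library) is to register them as default headers next to the origin header:

DefaultHeaders::Instance().addHeader("Access-Control-Allow-Origin", "*");
DefaultHeaders::Instance().addHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
DefaultHeaders::Instance().addHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");

webServer.onNotFound([](AsyncWebServerRequest *request) {
  if (request->method() == HTTP_OPTIONS) {
    request->send(200); // pre-flight succeeds; the default headers are attached automatically
  } else {
    request->send(404);
  }
});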

Path variable

With path variables you can create a custom regex rule for a specific parameter in a route. For example, we may want a sensorId parameter in a route rule to match only an integer.

  server.on("^\\/sensor\\/([0-9]+)$", HTTP_GET, [] (AsyncWebServerRequest *request) {
      String sensorId = request->pathArg(0);
  });
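
The captured value arrives as a String; a small illustrative sketch that converts it and answers the request:

  server.on("^\\/sensor\\/([0-9]+)$", HTTP_GET, [] (AsyncWebServerRequest *request) {
      int sensorId = request->pathArg(0).toInt(); // pathArg(0) is the first capture group
      request->send(200, "text/plain", String("Sensor id: ") + sensorId);
  });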

NOTE: All regex patterns start with ^ and end with $.

To enable path variable support, you have to define the build flag -DASYNCWEBSERVER_REGEX.

For Arduino IDE create/update platform.local.txt:

Windows: C:\Users\(username)\AppData\Local\Arduino15\packages\{espxxxx}\hardware\{espxxxx}\{version}\platform.local.txt

Linux: ~/.arduino15/packages/{espxxxx}/hardware/{espxxxx}/{version}/platform.local.txt

Add/Update the following line:

  compiler.cpp.extra_flags=-DASYNCWEBSERVER_REGEX

For platformio modify platformio.ini:

[env:myboard]
build_flags = 
  -DASYNCWEBSERVER_REGEX

NOTE: By enabling ASYNCWEBSERVER_REGEX, <regex> will be included. This will add about 100 kB to your binary.

cocos/cocos-engine

Cocos simplifies game creation and distribution with Cocos Creator, a free, open-source, cross-platform game engine. Empowering millions of developers to create high-performance, engaging 2D/3D games and instant web entertainment.


Cocos Creator Logo

stars forks license twitter

Engine for Cocos Creator

Cocos Creator is the new generation of game development tool in the Cocos family. It brings a complete set of 3D and 2D features while providing an intuitive, low-cost, and collaboration-friendly workflow to game developers. Cocos Engine is the runtime framework for the Cocos Creator editor.

image

Cocos Creator has inherited many good qualities and cool features from its previous versions, such as a high-performance low-level C++ implementation, an intuitive editor, and cross-platform support. It supports native platforms, web platforms, and rapidly expanding instant gaming platforms, including Windows, Mac, iOS, Android, HarmonyOS, Web, Facebook Instant Games, WeChat Mini Game, and TikTok Mini Games.

Furthermore, Cocos Creator has pushed the engine technology to a whole new level: high performance with scalability across platforms, full extensibility, and easy development.

  1. Modern Graphics: The GFX implementation is designed to adapt to modern graphics APIs; it uses Vulkan on Windows and Android, Metal on macOS and iOS, and WebGL on the web platform.
  2. High Performance: The runtime engine is built with roughly half C++ and half TypeScript; the low-level infrastructure, native platform adaptation, renderer, and scene management are all written in C++ to ensure high runtime performance. We continue to move heavy-lifting work to native code as much as possible.
  3. Customizable Render Pipeline: The render pipeline is designed to be fully customizable; it already supports the built-in forward and deferred render pipelines across all platforms. Developers can customize their own render pipeline following the same approach.
  4. Extensible Surface Shader: The material system is built on the Cocos effect format, which uses GLSL 300; shader programs are automatically converted to a suitable runtime format. The surface shader allows the surface material to be fully customized while ensuring a universal lighting model.
  5. Physically Based Rendering (PBR): The standard effect adopts physically based rendering; together with the physically based camera and lighting based on physical metrics, developers can easily achieve realistic and consistent rendering results across different environments.
  6. Easy TypeScript API: The user-level API set is provided in TypeScript; combined with the powerful VS Code editor, development with Cocos Creator is incredibly efficient.

Besides all these highlights, Cocos Creator also provides a built-in animation system, physics system, particle system, terrain editing support, a complex UI system, instant preview, and more.

image

This open source repository contains the runtime engine of Cocos Creator. The engine is tightly integrated with Cocos Creator and is designed to be only the essential runtime library, not to be used independently.

Development and Contribution Notice

The Cocos Creator engine is open source and welcomes community participation. For open source engine development with the Cocos Creator editor, you should fork this repository and set up a custom engine in the editor.

Prerequisite

Clone

Clone this repository into your local environment.

Install

In the cloned engine folder, run the following command to set up the development environment:

# download & build engine dependencies
npm install

This is all you have to do to set up the engine development environment.

Build

  • If running inside Cocos Creator, the engine will automatically compile and build after the editor window is opened. For more instructions on modifying the engine in Cocos Creator, please refer to Engine Customization Workflow.
  • Outside the editor, you need to run the following command to build:
npm run build

Please refer to native readme if you want to develop native applications.

Contribution

You can contribute to the Cocos Creator open source engine in many ways; all contributions are very much appreciated:

  1. Report bugs or request features by creating an issue.
  2. Participate in discussions in the issues.
  3. Create a pull request if you have fixed or improved anything or implemented a feature.
  4. Improve the documentation with pull requests to the usage documentation repository.
  5. Help other developers in our Forum.

Contribution notice

If you are making a pull request, there are some requirements that must be met for it to be accepted:

  1. Follow our Cpp Coding Style Guide and TypeScript Coding Style Reference.
  2. Try to integrate ESLint and CPP auto fix tools in your coding environment.
  3. Link related issues or discussions in your pull request and clearly state the purpose of your pull request.
  4. Pass all automatic continuous integration tests.
  5. Request file owner or engine developers to review your pull request.
  6. Get one valid approval from the engine architects.

Example Project

  • Mind Your Step 3D: Beginner's step-by-step tutorial project repo.
  • Test Cases: Unit test scenes for every engine module.
  • Example Cases: Simple yet expressive demo scenes for baseline testing and topic-specific case study.
  • Awesome Cocos: You can find other useful tools and showcases here.

Links

telegramdesktop/tdesktop

Telegram Desktop messaging app


Telegram Desktop – Official Messenger

This is the complete source code and the build instructions for the official Telegram messenger desktop client, based on the Telegram API and the MTProto secure protocol.

Version Build Status Build Status Build Status

Preview of Telegram Desktop

The source code is published under GPLv3 with OpenSSL exception, the license is available here.

Supported systems

The latest version is available for

Old system versions

Version 4.9.9 was the last that supports older systems

Version 2.4.4 was the last that supports older systems

Version 1.8.15 was the last that supports older systems

Third-party

Build instructions

R3nzTheCodeGOD/R3nzSkin

Skin changer for League of Legends (LOL)


C++ LOL Windows x64 License Issues Windows

R3nzSkin

Announcement

R3nzSkin is no longer supported due to Riot Games implementing Valorant's Vanguard anti-cheat in League of Legends.

R3nzSkin is an internal skin changer for League of Legends.

  • Change the skin of your champion, your ward, other champions, towers, minions, and jungle monsters in the game.
  • Automatic skin database update.
  • Support for spectator mode.
  • Change skins anytime and unlimited times in a single game.
  • Supports all popular languages in the world.
  • In-game configuration with ImGui.
  • JSON-based configuration saving and loading.

Building

  1. Clone the source with git clone --recursive https://github.com/R3nzTheCodeGOD/R3nzSkin.git
  2. Build in Visual Studio 2019/2022 with configuration "Your Region - x64"

Usage

  1. Compile source or download a compiled version.
  2. Use R3nzSkin_Injector.exe or inject the built DLL into the game yourself.
    • Administrator privileges may be needed if the injector fails to inject the DLL.
    • The League client can crash if R3nzSkin is injected before you are in the game.
      • A workaround is to wait and inject only once you are in the game (you will need to be fast to avoid disrupting the game).
  3. Press Insert to bring up the menu.
  4. Select the skin for you, your teammates, enemies, and wards.

Further optimizations

If your CPU supports the AVX / AVX2 / AVX-512 instruction sets, you can enable them in the project settings. This should result in more performant code optimized for your CPU. Currently the SSE2 instruction set is selected in the project settings.

Credits

This program is an improved and updated version of the B3akers/LeagueSkinChanger project.

defold/defold

Defold is a completely free to use game engine for development of desktop, mobile and web games.


CI - Main CI - Editor Only CI - Engine nightly

Join the chat at https://discord.gg/cHBde7J

Defold

Repository for the Defold engine, editor and command line tools.

Supported by

Folder Structure

  • build_tools - Build configuration and build tools used by build scripts
  • ci - Continuous integration files for GitHub CI (more info)
  • com.dynamo.cr - Bob
  • engine - Engine
  • editor - Editor
  • packages - External packages
  • scripts - Build and utility scripts
  • share - Misc shared stuff used by other tools. Waf build-scripts, valgrind suppression files, etc.

Setup and Build

Setup Engine

Follow the setup guide to install all of the tools needed to build the Defold engine.

Build Engine

Follow the build instructions to build the engine and command line tools.

Setup, Build and Run Editor

Follow the instructions in the editor folder.

Engine Overview

An overview of the engine architecture and additional engine information can be viewed here.

Platform Specifics

Releasing a new version

The release process is documented here.

Complying with licenses

A full list of third party software licenses along with information on how to give attribution and include the licenses in your game can be found in the COMPLYING WITH LICENSES document in the Defold repository on GitHub.

autowarefoundation/autoware.universe


Autoware Universe

Welcome to Autoware Universe

Autoware Universe serves as a foundational pillar within the Autoware ecosystem, playing a critical role in enhancing the core functionalities of autonomous driving technologies. This repository is a pivotal element of the Autoware Core/Universe concept, managing a wide array of packages that significantly extend the capabilities of autonomous vehicles.

autoware_universe_front

Getting Started

To dive into the vast world of Autoware and understand how Autoware Universe fits into the bigger picture, we recommend starting with the Autoware Documentation. This resource provides a thorough overview of the Autoware ecosystem, guiding you through its components, functionalities, and how to get started with development.

Explore Autoware Universe documentation

For those looking to explore the specifics of Autoware Universe components, the Autoware Universe Documentation, deployed with MkDocs, offers detailed insights.

apple/swift

The Swift Programming Language


Swift logo

Swift Programming Language

Architecture Build
macOS x86_64 Build Status
Ubuntu 20.04 x86_64 Build Status
Ubuntu 20.04 AArch64 Build Status
Ubuntu 22.04 x86_64 Build Status
Ubuntu 22.04 AArch64 Build Status
CentOS 7 x86_64 Build Status
Amazon Linux 2 x86_64 Build Status
Amazon Linux 2 AArch64 Build Status
Universal Base Image 9 x86_64 Build Status
Debian 12 x86_64 Build Status
Debian 12 AArch64 Build Status
Fedora 39 x86_64 Build Status
Fedora 39 AArch64 Build Status
Windows 10 x86_64 Build Status
Windows 10 ARM64 Build Status

Cross-Compilation Targets

Target Build
wasm32-unknown-wasi Build Status

Swift Community-Hosted CI Platforms

OS Architecture Build
Android ARMv7 Build Status
Android AArch64 Build Status
Windows 2019 (VS 2019) x86_64 Build Status

Welcome to Swift

Swift is a high-performance system programming language. It has a clean and modern syntax, offers seamless access to existing C and Objective-C code and frameworks, and is memory-safe by default.

Although inspired by Objective-C and many other languages, Swift is not itself a C-derived language. As a complete and independent language, Swift packages core features like flow control, data structures, and functions, with high-level constructs like objects, protocols, closures, and generics. Swift embraces modules, eliminating the need for headers and the code duplication they entail.

To learn more about the programming language, visit swift.org.

Contributing to Swift

Contributions to Swift are welcomed and encouraged! Please see the Contributing to Swift guide.

To be a truly great community, Swift.org needs to welcome developers from all walks of life, with different backgrounds, and with a wide range of experience. A diverse and friendly community will have more great ideas, more unique perspectives, and produce more great code. We will work diligently to make the Swift community welcoming to everyone.

To give clarity of what is expected of our members, Swift has adopted the code of conduct defined by the Contributor Covenant. This document is used across many open source communities, and we think it articulates our values well. For more, see the Code of Conduct.

Getting Started

If you are interested in:

We also have an FAQ that answers common questions.

Swift Toolchains

Building

Swift toolchains are created using the script build-toolchain. This script is used by swift.org's CI to produce snapshots, and it also lets you reproduce such builds locally for development or distribution purposes. A typical invocation looks like the following:

  $ ./swift/utils/build-toolchain $BUNDLE_PREFIX

where $BUNDLE_PREFIX is a string that will be prepended to the build date to give the bundle identifier of the toolchain's Info.plist. For instance, if $BUNDLE_PREFIX was com.example, the toolchain produced will have the bundle identifier com.example.YYYYMMDD. It will be created in the directory you run the script with a filename of the form: swift-LOCAL-YYYY-MM-DD-a-osx.tar.gz.

Beyond building the toolchain, build-toolchain also supports the following (non-exhaustive) set of useful options:

  • --dry-run: Perform a dry run build. This is off by default.
  • --test: Test the toolchain after it has been compiled. This is off by default.
  • --distcc: Use distcc to speed up the build by distributing the C++ part of the swift build. This is off by default.
  • --sccache: Use sccache to speed up subsequent builds of the compiler by caching more C++ build artifacts. This is off by default.

More options may be added over time. Please pass --help to build-toolchain to see the full set of options.

Installing into Xcode

On macOS if one wants to install such a toolchain into Xcode:

  1. Untar and copy the toolchain to one of /Library/Developer/Toolchains/ or ~/Library/Developer/Toolchains/. E.g.:
  $ sudo tar -xzf swift-LOCAL-YYYY-MM-DD-a-osx.tar.gz -C /
  $ tar -xzf swift-LOCAL-YYYY-MM-DD-a-osx.tar.gz -C ~/

The script also generates an archive containing debug symbols which can be installed over the main archive allowing symbolication of any compiler crashes.

  $ sudo tar -xzf swift-LOCAL-YYYY-MM-DD-a-osx-symbols.tar.gz -C /
  $ tar -xzf swift-LOCAL-YYYY-MM-DD-a-osx-symbols.tar.gz -C ~/
  2. Specify the local toolchain for Xcode's use via Xcode->Toolchains.

Build Failures

Try the suggestions in Troubleshooting build issues.

Make sure you are using the correct release of Xcode.

If you have changed Xcode versions but still encounter errors that appear to be related to the Xcode version, try passing --clean to build-script.

When a new version of Xcode is released, you can update your build without recompiling the entire project by passing --reconfigure to build-script.

Learning More

Be sure to look at the documentation index for a bird's eye view of the available documentation. In particular, the documents titled Debugging the Swift Compiler and Continuous Integration for Swift are very helpful to understand before submitting your first PR.
