Microphone considerations
The critical element is microphone quality: a Boya BY-LM40 or a Clippy EM272 (with a good aux-to-USB converter) is key to improving detection quality. Here are some example tests I did (the whole threads are well worth reading too): https://github.com/mcguirepr89/BirdNET-Pi/discussions/39#discussioncomment-9706951 https://github.com/mcguirepr89/BirdNET-Pi/discussions/1092#discussioncomment-9706191
My recommendations:
- Best entry-level system (< 50 €): Boya BY-LM40 (30 €) + dead cat windscreen (10 €)
- Best mid-range system (< 150 €): Clippy EM272 (55 €) + Rode AI-Micro TRRS-to-USB interface (70 €) + Rycote dead cat (27 €)
- Best high-end system (< 400 €): Clippy EM272 XLR (85 €) or LOM Ucho Pro (75 €) + Focusrite Scarlett 2i2 4th Gen (200 €) + Bubblebee Pro Extreme dead cat (45 €)
App settings recommendations
I've tested many settings by running two versions of my Home Assistant BirdNET-Pi add-on in parallel on the same RTSP feed and comparing the impact of each parameter. My conclusions aren't universal: results seem to depend heavily on the region and on the type of mic used. For example, the old model seems to work better in Australia, while the new one works better in Europe.
- Model
  - Version: 6K_v2.4 (performs better in Europe at least; the older 6K model performs better in Australia)
  - Species range model: v1 (uncheck v2.4; it seems more robust in Europe)
  - Species occurrence threshold: 0.001 (it was 0.00015 with range model v2.4; use the Species List Tester to find the right value for your location)
- Audio settings
  - Default
  - Channel: 1 (doesn't matter much, since the analysis is done on a mono signal; 1 reduces the size of saved audio but gives slightly messed-up spectrograms in my experience)
  - Recording Length: 18 (because I use an overlap of 0.5, the recording is analysed as 0-3 s, 2.5-5.5 s, 5-8 s, 7.5-10.5 s, 10-13 s, 12.5-15.5 s and 15-18 s; see the sketch after this list)
  - Extraction Length: 9 s (6 would do, but I like to hear my birds :-))
  - Audio format: mp3 (no reason to bother with anything else)
- BirdNET-Lite settings
  - Overlap: 0.5 s
  - Minimum confidence: 0.7
  - Sigmoid sensitivity: 1.25 (I tried 1.00 but it gave many more false positives, since decreasing this value increases sensitivity)
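To see why a recording length of 18 s pairs nicely with a 0.5 s overlap, here is a minimal sketch of the window arithmetic. It only assumes the standard 3 s BirdNET analysis window; the loop is illustrative and not taken from the BirdNET-Pi source.
# Quick check of how one recording is cut into BirdNET analysis windows.
# The 3 s window length is fixed by the model; the hop is window minus overlap.
WINDOW_S = 3.0
OVERLAP_S = 0.5      # the "Overlap" setting above
RECORDING_S = 18.0   # the "Recording Length" setting above

hop = WINDOW_S - OVERLAP_S
start = 0.0
while start + WINDOW_S <= RECORDING_S:
    print(f"{start:4.1f} s - {start + WINDOW_S:4.1f} s")
    start += hop
With these values the last window ends exactly at 18.0 s, so nothing at the end of the file is left unanalysed.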
Set up the RTSP server (https://github.com/mcguirepr89/BirdNET-Pi/discussions/1006#discussioncomment-6747450)
On your desktop
- Download the Raspberry Pi Imager
- Flash Raspberry Pi OS Lite (64-bit) to the SD card
Over SSH, install the required packages
# Update
sudo apt-get update -y
sudo apt-get dist-upgrade -y
# Disable useless services
sudo systemctl disable hciuart
sudo systemctl disable bluetooth
sudo systemctl disable triggerhappy
sudo systemctl disable avahi-daemon
sudo systemctl disable dphys-swapfile
# Install RTSP server
sudo apt-get install -y micro ffmpeg lsof
cd ~ && wget -c https://github.com/bluenviron/mediamtx/releases/download/v1.9.1/mediamtx_v1.9.1_linux_arm64v8.tar.gz -O - | tar -xz
Configure Audio
Find the right device
# List audio devices
arecord -l
# Check the audio device parameters, for example:
arecord -D hw:1,0 --dump-hw-params
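If you want to check the capture chain beyond the hardware parameters, the following small sketch records a short clip through arecord and prints its peak and RMS level. The device name, rate and duration are placeholders; adjust them to whatever arecord -l and --dump-hw-params reported.
#!/usr/bin/env python3
# Record a short test clip from the ALSA device and report its level,
# to confirm the chosen device, rate and channel count actually work.
import subprocess
import numpy as np

DEVICE = "hw:1,0"   # adjust to the card/device found with arecord -l
RATE = 48000        # pick a rate listed by --dump-hw-params
SECONDS = 5

cmd = [
    "arecord", "-D", DEVICE, "-f", "S16_LE",
    "-r", str(RATE), "-c", "1", "-d", str(SECONDS), "-t", "raw",
]
raw = subprocess.check_output(cmd)
audio = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
print(f"captured {len(audio)} samples ({len(audio) / RATE:.1f} s), "
      f"peak {np.abs(audio).max():.3f}, rms {np.sqrt(np.mean(audio ** 2)):.4f}")
A peak of 0.000 means the device or channel is wrong; a peak pinned at 1.000 means the gain is far too high.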
Add a startup script
nano startmic.sh && chmod +x startmic.sh
#!/bin/bash
echo "Starting birdmic"

# Disable gigabit Ethernet (100 Mbps is plenty for an audio stream)
sudo ethtool -s eth0 speed 100 duplex full autoneg on

# Run the GStreamer RTSP server if installed
if command -v gst-launch-1.0 &>/dev/null; then
    ./rtsp_audio_server.py --device plughw:0,0 --format S16LE --rate 96000 --channels 2 --mount-point /birdmic --port 8554 >/tmp/log_rtsp 2>/tmp/log_rtsp_error &
    gst_pid=$!
else
    echo "GStreamer not found, skipping to ffmpeg fallback"
    gst_pid=0
fi

# Wait a moment to check whether the process failed
sleep 5

# Fall back to ffmpeg if GStreamer is absent or its server already died
if [ "$gst_pid" -eq 0 ] || ! ps aux | grep "[r]tsp_audio_server.py" > /dev/null; then
    echo "GStreamer not running, switching to ffmpeg"
    # Start mediamtx first and give it a moment to initialize
    ./mediamtx &
    sleep 5
    # Publish the ALSA capture to mediamtx with ffmpeg
    ffmpeg -nostdin -use_wallclock_as_timestamps 1 -fflags +genpts -f alsa -acodec pcm_s16be -ac 2 -ar 96000 \
        -i plughw:0,0 -ac 2 -f rtsp -acodec pcm_s16be rtsp://localhost:8554/birdmic -rtsp_transport tcp \
        -buffer_size 512k 2>/tmp/rtsp_error &
else
    echo "GStreamer is running successfully"
fi

# Set microphone volume
sleep 5
MICROPHONE_NAME="Line In 1 Gain"  # for Focusrite Scarlett 2i2
sudo amixer -c 0 sset "$MICROPHONE_NAME" 40
sleep 60

# Run the Focusrite and autogain scripts if present
if [ -f "$HOME/focusrite.sh" ]; then
    "$HOME/focusrite.sh" >/tmp/log_focusrite 2>/tmp/log_focusrite_error &
fi
if [ -f "$HOME/autogain.py" ]; then
    python3 "$HOME/autogain.py" >/tmp/log_autogain 2>/tmp/log_autogain_error &
fi
Optional: use GStreamer instead of ffmpeg
# Install gstreamer
sudo apt-get update
sudo apt-get install -y \
gstreamer1.0-rtsp \
gstreamer1.0-tools \
gstreamer1.0-alsa \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav
Create a script named rtsp_audio_server.py in your home directory and make it executable (chmod +x rtsp_audio_server.py)
#!/usr/bin/env python3
import sys
import gi
import argparse
import socket
import logging

gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GLib

# Initialize GStreamer
Gst.init(None)


def get_lan_ip():
    """
    Retrieves the LAN IP address by creating a dummy connection.
    """
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            # This doesn't send any data; it's just used to get the local IP address
            s.connect(("8.8.8.8", 80))
            return s.getsockname()[0]
    except Exception as e:
        logging.error(f"Failed to get LAN IP address: {e}")
        return "127.0.0.1"


class PCMStream(GstRtspServer.RTSPMediaFactory):
    def __init__(self, device, format, rate, channels):
        super(PCMStream, self).__init__()
        self.device = device
        self.format = format
        self.rate = rate
        self.channels = channels
        self.set_shared(True)

    def do_create_element(self, url):
        """
        Overridden method to create the GStreamer pipeline.
        """
        # Attempt to retrieve and log the RTSP URL's URI
        try:
            # Some versions might have 'get_uri()', others might not
            uri = url.get_uri()
            logging.info(f"Creating pipeline for URL: {uri}")
        except AttributeError:
            # Fallback if 'get_uri()' doesn't exist
            logging.info("Creating pipeline for RTSP stream.")

        # Define the GStreamer pipeline string for PCM streaming
        pipeline_str = (
            f"alsasrc device={self.device} ! "
            f"audio/x-raw, format={self.format}, rate={self.rate}, channels={self.channels} ! "
            "audioconvert ! audioresample ! "
            "rtpL16pay name=pay0 pt=96"
        )
        logging.info(f"Pipeline: {pipeline_str}")

        # Parse and launch the pipeline
        pipeline = Gst.parse_launch(pipeline_str)
        if not pipeline:
            logging.error("Failed to create GStreamer pipeline.")
            return None

        # Get the bus from the pipeline and connect to the message handler
        bus = pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect("message", self.on_message)
        return pipeline

    def on_message(self, bus, message):
        t = message.type
        if t == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            logging.error(f"GStreamer Error: {err}, {debug}")
        elif t == Gst.MessageType.WARNING:
            err, debug = message.parse_warning()
            logging.warning(f"GStreamer Warning: {err}, {debug}")
        elif t == Gst.MessageType.EOS:
            logging.info("End-Of-Stream reached.")
        return True


class GstServer:
    def __init__(self, mount_point, device, format, rate, channels, port, ip=None):
        self.mount_point = mount_point
        self.device = device
        self.format = format
        self.rate = rate
        self.channels = channels
        self.port = port
        self.ip = ip

        self.server = GstRtspServer.RTSPServer()
        self.server.set_service(str(self.port))
        if self.ip:
            self.server.set_address(self.ip)
        else:
            self.server.set_address("0.0.0.0")

        self.factory = PCMStream(self.device, self.format, self.rate, self.channels)
        self.mount_points = self.server.get_mount_points()
        self.mount_points.add_factory(self.mount_point, self.factory)

        try:
            self.server.attach(None)
        except Exception as e:
            logging.error(f"Failed to attach RTSP server: {e}")
            sys.exit(1)

        server_ip = self.ip if self.ip else get_lan_ip()

        # Verify that the server is listening on the desired port
        if not self.verify_server_binding():
            logging.error(f"RTSP server failed to bind to port {self.port}. It might already be in use.")
            sys.exit(1)

        print(f"RTSP server is live at rtsp://{server_ip}:{self.port}{self.mount_point}")

    def verify_server_binding(self):
        """
        Verifies if the RTSP server is successfully listening on the specified port.
        """
        try:
            with socket.create_connection(("127.0.0.1", self.port), timeout=2):
                return True
        except Exception as e:
            logging.error(f"Verification failed: {e}")
            return False


def parse_args():
    parser = argparse.ArgumentParser(description="GStreamer RTSP Server for 16-bit PCM Audio")
    parser.add_argument(
        '--device', type=str, default='plughw:0,0',
        help='ALSA device to capture audio from (default: plughw:0,0)'
    )
    parser.add_argument(
        '--format', type=str, default='S16LE',
        help='Audio format (default: S16LE)'
    )
    parser.add_argument(
        '--rate', type=int, default=44100,
        help='Sampling rate in Hz (default: 44100)'
    )
    parser.add_argument(
        '--channels', type=int, default=1,
        help='Number of audio channels (default: 1)'
    )
    parser.add_argument(
        '--mount-point', type=str, default='/birdmic',
        help='RTSP mount point (default: /birdmic)'
    )
    parser.add_argument(
        '--port', type=int, default=8554,
        help='RTSP server port (default: 8554)'
    )
    parser.add_argument(
        '--ip', type=str, default=None,
        help='Explicit LAN IP address to bind the RTSP server to (default: auto-detected)'
    )
    return parser.parse_args()


def main():
    # Configure logging to display errors and warnings
    logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')
    args = parse_args()
    try:
        server = GstServer(
            mount_point=args.mount_point,
            device=args.device,
            format=args.format,
            rate=args.rate,
            channels=args.channels,
            port=args.port,
            ip=args.ip
        )
    except Exception as e:
        logging.error(f"Failed to initialize RTSP server: {e}")
        sys.exit(1)

    loop = GLib.MainLoop()
    try:
        loop.run()
    except KeyboardInterrupt:
        print("Shutting down RTSP server...")
        loop.quit()
    except Exception as e:
        logging.error(f"An unexpected error occurred: {e}")
        loop.quit()


if __name__ == "__main__":
    main()
Optional: start automatically at boot
chmod +x startmic.sh
crontab -e # select nano as your editor
Paste in @reboot $HOME/startmic.sh then save and exit nano.
Reboot the Pi and test again with VLC to make sure the RTSP stream is live.
Optional: optimize config.txt
sudo nano /boot/firmware/config.txt
# Enable audio and USB optimizations
dtparam=audio=off # Disable the default onboard audio to prevent conflicts
dtoverlay=disable-bt # Disable onboard Bluetooth to reduce USB bandwidth usage
dtoverlay=disable-wifi # Disable onboard wifi
# Limit Ethernet to 100 Mbps (disable Gigabit Ethernet)
dtparam=eth_max_speed=100
# USB optimizations
dwc_otg.fiq_fix_enable=1 # Enable FIQ (Fast Interrupt) handling for improved USB performance
max_usb_current=1 # Increase the available USB current (required if Scarlett is powered over USB)
# Additional audio settings (for low-latency operation)
avoid_pwm_pll=1 # Use a more stable PLL for the audio clock
# Optional: HDMI and other settings can be turned off if not needed
hdmi_blanking=1 # Disable HDMI (save power and reduce interference)
Optional: install the Focusrite driver
sudo apt-get install make linux-headers-$(uname -r)
curl -LO https://github.com/geoffreybennett/scarlett-gen2/releases/download/v6.9-v1.3/snd-usb-audio-kmod-6.6-v1.3.tar.gz
tar -xzf snd-usb-audio-kmod-6.6-v1.3.tar.gz
cd snd-usb-audio-kmod-6.6-v1.3
KSRCDIR=/lib/modules/$(uname -r)/build
make -j4 -C $KSRCDIR M=$(pwd) clean
make -j4 -C $KSRCDIR M=$(pwd)
sudo make -j4 -C $KSRCDIR M=$(pwd) INSTALL_MOD_DIR=updates/snd-usb-audio modules_install
sudo depmod
sudo reboot
dmesg | grep -A 5 -B 5 -i focusrite
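As an extra check, you can ask modinfo which snd-usb-audio module the kernel will load; after a successful build the reported file should sit under the updates/snd-usb-audio directory. A small sketch (it assumes the module name snd-usb-audio used by the package above):
#!/usr/bin/env python3
# Print where the snd-usb-audio module is resolved from and its vermagic,
# to confirm the freshly built module (under .../updates/) is the one in use.
import subprocess

info = subprocess.check_output(["modinfo", "snd-usb-audio"]).decode()
for line in info.splitlines():
    if line.startswith(("filename:", "version:", "vermagic:")):
        print(line)
# Expected after install: filename: /lib/modules/<kernel>/updates/snd-usb-audio/...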
Optional: add a RAM disk (mount /tmp in RAM to reduce SD card wear)
sudo cp /usr/share/systemd/tmp.mount /etc/systemd/system/tmp.mount
sudo systemctl enable tmp.mount
sudo systemctl start tmp.mount
Optional: configuration for the Focusrite Scarlett 2i2
Add the following to $HOME/focusrite.sh, then make it executable: chmod +x "$HOME/focusrite.sh"
#!/bin/bash
# Set PCM controls for capture
sudo amixer -c 0 cset numid=31 'Analogue 1' # 'PCM 01' - Set to 'Analogue 1'
sudo amixer -c 0 cset numid=32 'Analogue 1' # 'PCM 02' - Set to 'Analogue 1'
sudo amixer -c 0 cset numid=33 'Off' # 'PCM 03' - Disabled
sudo amixer -c 0 cset numid=34 'Off' # 'PCM 04' - Disabled
# Set DSP Input controls (Unused, set to Off)
sudo amixer -c 0 cset numid=29 'Off' # 'DSP Input 1'
sudo amixer -c 0 cset numid=30 'Off' # 'DSP Input 2'
# Configure Line In 1 as main input for mono setup
sudo amixer -c 0 cset numid=8 'Off' # 'Line In 1 Air' - Keep 'Off'
sudo amixer -c 0 cset numid=14 off # 'Line In 1 Autogain' - Disabled
sudo amixer -c 0 cset numid=6 'Line' # 'Line In 1 Level' - Set level to 'Line'
sudo amixer -c 0 cset numid=21 on # 'Line In 1 Safe' - Enabled to avoid clipping / noise impact ?
# Disable Line In 2 to minimize interference (if not used)
sudo amixer -c 0 cset numid=9 'Off' # 'Line In 2 Air'
sudo amixer -c 0 cset numid=17 off # 'Line In 2 Autogain' - Disabled
sudo amixer -c 0 cset numid=16 0 # 'Line In 2 Gain' - Set gain to 0 (mute)
sudo amixer -c 0 cset numid=7 'Line' # 'Line In 2 Level' - Set to 'Line'
sudo amixer -c 0 cset numid=22 off # 'Line In 2 Safe' - Disabled
# Set Line In 1-2 controls
sudo amixer -c 0 cset numid=12 off # 'Line In 1-2 Link' - No need to link for mono
sudo amixer -c 0 cset numid=10 on # 'Line In 1-2 Phantom Power' - Enabled for condenser mics
# Set Analogue Outputs to use the same mix for both channels (Mono setup)
sudo amixer -c 0 cset numid=23 'Mix A' # 'Analogue Output 01' - Set to 'Mix A'
sudo amixer -c 0 cset numid=24 'Mix A' # 'Analogue Output 02' - Same mix as Output 01
# Set Direct Monitor to off to prevent feedback
sudo amixer -c 0 cset numid=53 'Off' # 'Direct Monitor'
# Set Input Select to Input 1
sudo amixer -c 0 cset numid=11 'Input 1' # 'Input Select'
# Optimize Monitor Mix settings for mono output
sudo amixer -c 0 cset numid=54 153 # 'Monitor 1 Mix A Input 01' - Set to 153 (around -3.50 dB)
sudo amixer -c 0 cset numid=55 153 # 'Monitor 1 Mix A Input 02' - Set to 153 for balanced output
sudo amixer -c 0 cset numid=56 0 # 'Monitor 1 Mix A Input 03' - Mute unused channels
sudo amixer -c 0 cset numid=57 0 # 'Monitor 1 Mix A Input 04'
# Set Sync Status to Locked
sudo amixer -c 0 cset numid=52 'Locked' # 'Sync Status'
echo "Mono optimization applied. Only using primary input and balanced outputs."
Optional: autogain script for the microphone
Add the following to $HOME/autogain.py, then make it executable: chmod +x "$HOME/autogain.py"
#!/usr/bin/env python3
"""
Microphone Gain Adjustment Script

This script captures audio from an RTSP stream, processes it to calculate the RMS
within the 2000-8000 Hz frequency band, and adjusts the microphone gain based on
predefined noise thresholds and trends.

Dependencies:
- numpy
- scipy
- ffmpeg (installed and accessible in PATH)
- amixer (for microphone gain control)

Author: OpenAI ChatGPT
Date: 2024-04-27
"""
import subprocess
import numpy as np
from scipy.signal import butter, sosfilt
import time
import re

# ---------------------------- Configuration ----------------------------

# Microphone Settings
MICROPHONE_NAME = "Line In 1 Gain"  # Adjust to match your microphone's control name
MIN_GAIN_DB = 20                    # Minimum gain in dB
MAX_GAIN_DB = 45                    # Maximum gain in dB
DECREASE_GAIN_STEP_DB = 1           # Gain decrease step in dB
INCREASE_GAIN_STEP_DB = 5           # Gain increase step in dB

# Noise Thresholds
NOISE_THRESHOLD_HIGH = 0.001        # Upper threshold for noise RMS amplitude
NOISE_THRESHOLD_LOW = 0.00035       # Lower threshold for noise RMS amplitude

# Trend Detection
TREND_COUNT_THRESHOLD = 1           # Number of consecutive trends needed to adjust gain

# RTSP Stream URL
RTSP_URL = "rtsp://192.168.178.124:8554/birdmic"  # Replace with your RTSP stream URL

# Debug Mode (1 for enabled, 0 for disabled)
DEBUG = 1

# -----------------------------------------------------------------------


def debug(msg):
    """
    Prints debug messages if DEBUG mode is enabled.

    :param msg: The debug message to print.
    """
    if DEBUG:
        current_time = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
        print(f"[{current_time}] [DEBUG] {msg}")


def get_gain_db(mic_name):
    """
    Retrieves the current gain setting of the specified microphone using amixer.

    :param mic_name: The name of the microphone control in amixer.
    :return: The current gain in dB as a float, or None if retrieval fails.
    """
    cmd = ['amixer', 'sget', mic_name]
    try:
        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode()
        # Regex to find patterns like [30.00dB]
        match = re.search(r'\[(-?\d+(\.\d+)?)dB\]', output)
        if match:
            gain_db = float(match.group(1))
            debug(f"Retrieved gain: {gain_db} dB")
            return gain_db
        else:
            debug("No gain information found in amixer output.")
            return None
    except subprocess.CalledProcessError as e:
        debug(f"amixer sget failed: {e}")
        return None


def set_gain_db(mic_name, gain_db):
    """
    Sets the gain of the specified microphone using amixer.

    :param mic_name: The name of the microphone control in amixer.
    :param gain_db: The desired gain in dB.
    :return: True if the gain was set successfully, False otherwise.
    """
    cmd = ['amixer', 'sset', mic_name, f'{gain_db}dB']
    try:
        subprocess.check_call(cmd, stderr=subprocess.STDOUT)
        debug(f"Set gain to: {gain_db} dB")
        return True
    except subprocess.CalledProcessError as e:
        debug(f"amixer sset failed: {e}")
        return False


def calculate_noise_rms(rtsp_url, bandpass_sos, num_bins=5):
    """
    Captures audio from an RTSP stream, applies a bandpass filter, divides the
    audio into segments, and calculates the RMS of the quietest segment.

    :param rtsp_url: The RTSP stream URL.
    :param bandpass_sos: Precomputed bandpass filter coefficients (Second-Order Sections).
    :param num_bins: Number of segments to divide the audio into.
    :return: The RMS amplitude of the quietest segment as a float, or None on failure.
    """
    cmd = [
        'ffmpeg',
        '-loglevel', 'error',
        '-rtsp_transport', 'tcp',
        '-i', rtsp_url,
        '-vn',
        '-f', 's16le',
        '-acodec', 'pcm_s16le',
        '-ar', '32000',
        '-ac', '1',
        '-t', '5',
        '-'
    ]
    try:
        debug(f"Starting audio capture from {rtsp_url}")
        process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = process.communicate()
        if process.returncode != 0:
            debug(f"ffmpeg failed with error: {stderr.decode()}")
            return None

        # Convert raw PCM data to numpy array
        audio = np.frombuffer(stdout, dtype=np.int16).astype(np.float32) / 32768.0
        debug(f"Captured {len(audio)} samples from audio stream.")
        if len(audio) == 0:
            debug("No audio data captured.")
            return None

        # Apply bandpass filter
        filtered = sosfilt(bandpass_sos, audio)
        debug("Applied bandpass filter to audio data.")

        # Divide into num_bins
        total_samples = len(filtered)
        bin_size = total_samples // num_bins
        if bin_size == 0:
            debug("Bin size is 0; insufficient audio data.")
            return 0.0
        trimmed_length = bin_size * num_bins
        trimmed_filtered = filtered[:trimmed_length]
        segments = trimmed_filtered.reshape(num_bins, bin_size)
        debug(f"Divided audio into {num_bins} bins of {bin_size} samples each.")

        # Calculate RMS for each segment
        rms_values = np.sqrt(np.mean(segments ** 2, axis=1))
        debug(f"Calculated RMS values for each segment: {rms_values}")

        # Return the minimum RMS value
        min_rms = rms_values.min()
        debug(f"Minimum RMS value among segments: {min_rms}")
        return min_rms
    except Exception as e:
        debug(f"Exception during noise RMS calculation: {e}")
        return None


def main():
    """
    Main loop that continuously monitors background noise and adjusts microphone gain.
    """
    TREND_COUNT = 0
    PREVIOUS_TREND = 0

    # Precompute the bandpass filter coefficients
    LOWCUT = 2000       # Lower frequency bound in Hz
    HIGHCUT = 8000      # Upper frequency bound in Hz
    FILTER_ORDER = 5    # Order of the Butterworth filter
    # fs must match the 32 kHz capture rate requested from ffmpeg above
    sos = butter(FILTER_ORDER, [LOWCUT, HIGHCUT], btype='band', fs=32000, output='sos')
    debug("Precomputed Butterworth bandpass filter coefficients.")

    # Set the microphone gain to the maximum gain at the start
    success = set_gain_db(MICROPHONE_NAME, MAX_GAIN_DB)
    if success:
        print(f"Microphone gain set to {MAX_GAIN_DB} dB at start.")
    else:
        print("Failed to set microphone gain at start. Exiting.")
        return

    while True:
        min_rms = calculate_noise_rms(RTSP_URL, sos, num_bins=5)
        if min_rms is None:
            print("Failed to compute noise RMS. Retrying in 1 minute...")
            time.sleep(60)
            continue
        if not isinstance(min_rms, (float, int)):
            print(f"Invalid noise RMS output detected: {min_rms}. Retrying in 1 minute...")
            time.sleep(60)
            continue

        # Print the final converted RMS amplitude (only once)
        print(f"Converted RMS Amplitude: {min_rms}")
        debug(f"Current background noise (RMS amplitude): {min_rms}")

        # Determine the noise trend
        if min_rms > NOISE_THRESHOLD_HIGH:
            CURRENT_TREND = 1
        elif min_rms < NOISE_THRESHOLD_LOW:
            CURRENT_TREND = -1
        else:
            CURRENT_TREND = 0
        debug(f"Current trend: {CURRENT_TREND}")

        if CURRENT_TREND != 0:
            if CURRENT_TREND == PREVIOUS_TREND:
                TREND_COUNT += 1
            else:
                TREND_COUNT = 1
            PREVIOUS_TREND = CURRENT_TREND
        else:
            TREND_COUNT = 0
        debug(f"Trend count: {TREND_COUNT}")

        CURRENT_GAIN_DB = get_gain_db(MICROPHONE_NAME)
        if CURRENT_GAIN_DB is None:
            print("Failed to get current gain level. Retrying in 1 minute...")
            time.sleep(60)
            continue
        debug(f"Current gain: {CURRENT_GAIN_DB} dB")

        if TREND_COUNT >= TREND_COUNT_THRESHOLD:
            if CURRENT_TREND == 1:
                # Decrease gain by 1 dB
                NEW_GAIN_DB = CURRENT_GAIN_DB - DECREASE_GAIN_STEP_DB
                if NEW_GAIN_DB < MIN_GAIN_DB:
                    NEW_GAIN_DB = MIN_GAIN_DB
                success = set_gain_db(MICROPHONE_NAME, NEW_GAIN_DB)
                if success:
                    print(f"Decreased gain to {NEW_GAIN_DB} dB")
                    debug(f"Gain adjusted to {NEW_GAIN_DB} dB")
                else:
                    print("Failed to set new gain.")
            elif CURRENT_TREND == -1:
                # Increase gain by 5 dB
                NEW_GAIN_DB = CURRENT_GAIN_DB + INCREASE_GAIN_STEP_DB
                if NEW_GAIN_DB > MAX_GAIN_DB:
                    NEW_GAIN_DB = MAX_GAIN_DB
                success = set_gain_db(MICROPHONE_NAME, NEW_GAIN_DB)
                if success:
                    print(f"Increased gain to {NEW_GAIN_DB} dB")
                    debug(f"Gain adjusted to {NEW_GAIN_DB} dB")
                else:
                    print("Failed to set new gain.")
            TREND_COUNT = 0
        else:
            debug("No gain adjustment needed.")

        # Sleep for 1 minute before the next iteration
        time.sleep(60)


if __name__ == "__main__":
    main()
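To make the threshold-and-trend behaviour easier to follow, here is a dry run of the same adjustment rule with made-up RMS readings; the constants are copied from the script above, but the readings themselves are invented.
#!/usr/bin/env python3
# Simulate the gain adjustment rule from autogain.py on synthetic RMS values,
# so the effect of the hysteresis band and trend counter is visible offline.
NOISE_THRESHOLD_HIGH = 0.001
NOISE_THRESHOLD_LOW = 0.00035
TREND_COUNT_THRESHOLD = 1
MIN_GAIN_DB, MAX_GAIN_DB = 20, 45
DECREASE_STEP, INCREASE_STEP = 1, 5

gain = MAX_GAIN_DB
trend_count, previous_trend = 0, 0
readings = [0.0005, 0.0020, 0.0018, 0.0004, 0.0002, 0.0001]  # invented RMS values

for rms in readings:
    trend = 1 if rms > NOISE_THRESHOLD_HIGH else (-1 if rms < NOISE_THRESHOLD_LOW else 0)
    if trend != 0:
        trend_count = trend_count + 1 if trend == previous_trend else 1
        previous_trend = trend
    else:
        trend_count = 0
    if trend_count >= TREND_COUNT_THRESHOLD:
        if trend == 1:
            gain = max(MIN_GAIN_DB, gain - DECREASE_STEP)   # too noisy: back off slowly
        else:
            gain = min(MAX_GAIN_DB, gain + INCREASE_STEP)   # very quiet: open up faster
        trend_count = 0
    print(f"rms={rms:.4f}  trend={trend:+d}  gain={gain} dB")
Readings inside the 0.00035-0.001 band leave the gain untouched, loud readings nudge it down 1 dB at a time, and quiet readings push it back up 5 dB at a time, clamped to the 20-45 dB range.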