So I discovered the main culprit, and really it was quite simple, but it was something that I completely overlooked. As was suggested, the problem did in fact lie in the player height. While half the screen height is one way to represent the player height, that would be in screen space. flatLnDist needed the player height in world space, which is TILE_SIZE / 2. So now I've added the player's world-space height to the player struct, and the formula now looks like this:
flatLnDist = (player.height * DIST_PROJ_PLANE) / r;
I also added ceiling rendering like so:
ceilingY = (GAMEWINDOW_HEIGHT / 2) - r;
Because each vertical scanline, both in the floor and the ceiling, has the same pixel x coordinate, and all the rest of the math for the ceiling is already done by the floor, I only have to modify the pixel y coordinate to essentially mirror the floor projection over the horizon line. The result looks like this:
![Successful floor and ceiling projection](https://i.sstatic.net/3GYH8vnl.png)
Excellent. The only issue now, I believe, has to do with rounding and how the walls scale the closer and further away from them you are. In any case, I feel I've sufficiently answered my original question, and if I do get stumped on this new issue it seems it would be better to just create a new post.
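For anyone following along, here is a minimal sketch of the combined per-column flats loop; put_pixel, sample_floor, sample_ceiling and the constants are placeholders for the engine's own definitions:
/* Draw the floor and its mirrored ceiling for one screen column x. */
void draw_flats_column(int x, float playerHeight /* world space: TILE_SIZE / 2 */)
{
    for (int r = 1; r <= GAMEWINDOW_HEIGHT / 2; r++) {
        float flatLnDist = (playerHeight * DIST_PROJ_PLANE) / r; /* world-space distance */
        int floorY   = (GAMEWINDOW_HEIGHT / 2) + r;  /* row below the horizon */
        int ceilingY = (GAMEWINDOW_HEIGHT / 2) - r;  /* same row mirrored above it */
        put_pixel(x, floorY,   sample_floor(flatLnDist, x));
        put_pixel(x, ceilingY, sample_ceiling(flatLnDist, x));
    }
}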
It's a valid approach, but the most significant concern is the cold start time: AWS needs longer initialisation to set up the execution environment, load the code, initialise the Spring application context, etc.
That said, AWS provides Lambda SnapStart, which can significantly reduce cold start times by creating a snapshot of the initialised execution environment. So, if, after weighing the trade-offs, you decide to use Spring with Lambda, give SnapStart a try.
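For illustration, enabling it in an AWS SAM template might look like this (function name, handler, runtime, and code location are placeholders; SnapStart only takes effect on published versions):
Resources:
  SpringApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.StreamLambdaHandler::handleRequest
      Runtime: java21
      CodeUri: target/app.jar
      AutoPublishAlias: live        # SnapStart applies to published versions/aliases
      SnapStart:
        ApplyOn: PublishedVersions  # snapshot the initialised environment at publish time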
https://docs.google.com/spreadsheets/d/{spreadsheet_id}/export?format=csv#
Putting the # at the end should work.
Upvote! Please consider this for DevSecOps users
Live Variables are available in VSCode but disabled by default in the cortex_debug extension. To enable them, add the following snippet to your configuration:
"liveWatch": {
"enabled": true,
"samplesPerSecond": 20
},
See the related issue in the STM32 extension.
For additional information, please also have a look at the cortex_debug debug attributes.
In short, DECLARE statements must be at the beginning of a block. This can be the beginning of the very first block (for a global variable) or at the beginning of any embedded block (for a local variable). A DECLARE statement cannot follow other executable code in the same block.
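For example, in MySQL-style syntax (all names made up):
CREATE PROCEDURE demo()
BEGIN
    DECLARE total INT DEFAULT 0;        -- OK: first thing in the block
    SET total = total + 1;
    BEGIN
        DECLARE inner_note VARCHAR(20); -- OK: first thing in the nested block
        SET inner_note = 'inner';
    END;
    -- DECLARE late_var INT;            -- error: would follow executable code
END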
The solution for me was pretty simple:
I had to delete the old PHP version:
apt purge php7.4
After that, no more errors were shown.
I have tried it out and it worked with these steps:
Log into GitHub Copilot from a non-admin VSCode window
Once the login process is complete, open VSCode as admin
Let me know if this worked for you!
I have been occasionally getting the same type of error when timer-executed integration jobs run in OpenText BizLink / BizManager...
A couple of differences I'm seeing between the OP's issue and mine are that the connection settings never change. Ever.
com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: null; Proxy: xXxXxXx.xXx.XxXxX.com), S3 Extended Request ID: null]. After executing [0] of [0] retries with delay of [0] second(s).
We may get this error at 6 AM during a file push process; it fails, and when I re-execute the push it is successful.
Due to the intermittency, I started to suspect this was some type of internet connectivity issue.
And there's the oddity that there were no retries, though I have specified 2 retries with a 600-second delay. So it's as if it's bypassing the additional retries, perhaps because it never reaches AWS from our HTTP proxy host, which handles the connectivity?
The version used in 2025:
import geopandas as gpd
import geodatasets
world_path = geodatasets.get_path('naturalearth.land')
world = gpd.read_file(world_path)
world
no longer has the pop_est column. Please provide some recommendations to make it look similar to the old one:
gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
In my case that error meant that the Vagrant I'm using (on Debian bookworm) isn't compatible with the VirtualBox I'm using (virtualbox-7.1). Installing (actually downgrading to) virtualbox-7.0 fixed the problem.
Check the free memory; a minimum of 2 GB of memory is required for this.
You can easily use the sweep gradient type to achieve that.
<shape xmlns:android="http://schemas.android.com/apk/res/android">
<gradient android:startColor="@color/green" android:endColor="@color/BackgroundColor"
android:angle="90"
android:centerX="-5"
android:centerY="0.5"
android:type="sweep"
/>
</shape>
I finally solved the problem. In short, it was the OpenBSD system resource limits that caused the issue. Obviously, nvim as an IDE is very resource-hungry, so I had to tweak the staff class in /etc/login.conf:
staff:\
:datasize-cur=2G:\
:datasize-max=unlimited:\
:stacksize=8M:\
:openfiles=8192:\
:maxproc-max=2048:\
:maxproc-cur=2048:\
:memorylocked=131072:\
:memoryuse=unlimited:\
:ignorenologin:\
:requirehome@:\
:tc=default:
and also create /etc/sysctl.conf with the following line:
kern.maxfiles=65536
Now everything works like a charm with the default settings I use also on other OSs such as Linux and DragonFlyBSD.
Special thanks to everybody and to Volker from the OpenBSD misc list, with whom I corresponded also in private. Special thanks also to ChatGPT and grok :)
Now that this is solved, I would be happy if you can comment on my resource limits. Maybe some settings can be further improved? This is my current default class:
default:\
:path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/bin /usr/local/bin /usr/local/sbin:\
:umask=022:\
:datasize=unlimited:\
:maxproc-max=256:\
:maxproc-cur=128:\
:openfiles=8192:\
:stacksize-cur=4M:\
:localcipher=blowfish,a:\
:tc=auth-defaults:\
:tc=auth-ftp-defaults:
I think I tweaked some things here as well.
I am looking forward to your comments.
Best regards,
Martin
I just had this occur but wasn't using the 'Quick Find' feature.
Closing and reopening Visual Studio fixed this for me.
Using version:
Microsoft Visual Studio Professional 2022 (64-bit) - Preview
Version 17.14.0 Preview 2.0
Did you ever succeed in getting this to run?
I'm using dbt 1.8.9 and dbt-sqlserver 1.8.4.
Snapshot table and model are in different databases on the same server, running with no snapshot table existing works.
Running as a db user with db_owner access works, even after the table is already created.
Delete the table, run as a service account, it succeeds.
Run again as the service account and SQL Server returns a permissions error (a "table already exists" error).
Cannot figure out which SQL Server permission the service account is missing.
The answer above helped me with a .NET MAUI mobile application. I was getting an empty token (#_=_) in some Android browsers (Chrome, in my case). After adding the query parameter below, I was able to receive the authentication token. Thank you!
&session_mode=token
Try plain SQL using nativeQuery.
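For example, with Spring Data JPA (the entity, table, and column names here are made up):
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface UserRepository extends JpaRepository<User, Long> {

    // nativeQuery = true sends the string to the database as raw SQL instead of JPQL
    @Query(value = "SELECT * FROM users u WHERE u.status = :status", nativeQuery = true)
    List<User> findByStatusNative(@Param("status") String status);
}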
With regards to the first half of the question (why the "arithmetic mean" -- the average -- is not used):
model.fit uses the output of those tf.keras.losses.* functions for tf.keras.optimizers.*, which use the original vector (the whole set of values).
Thus, the name MeanSquaredError refers to the training formula as a whole (which is for regression loops, as opposed to categorical_crossentropy, which is for classification loops), not the literal result of this specific step of the outer model.fit loop.
For the second half of the question, I will defer to https://stackoverflow.com/a/70296585/24473928
A solution that I found to be able to run IE11 (without Microsoft Edge being opened) was to add these two registry keys. It does not affect the use of Edge, and it allows the Internet Explorer icon to open IE and not Microsoft Edge. The keys are as follows:
reg add "hkcu \ software \ Microsoft \ Internet Explorer \ Main \ featurecontrol \ feature_bypass_sbe_fdd5e421" /v "iexplore.exe" /t reg_dword /d 1 /f
reg add "hkcu \ software \ Microsoft \ Internet Explorer \ Main \ featurecontrol \ feature_bypass_sbe_08a97d41b2f8" /v "iexplore.exe" /t reg_dword /d 1 /f
Very misleading title; it should be "how to align dates over multiple securities". I am actually looking to plot data for multiple securities and output a matrix with columns corresponding to the input list of securities.
You can check out Scanbot SDK, they support MaxiCodes.
To replace a full line:
sed '<line number>,<line number>s/.*/<new line content>/' file.txt
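For example, to replace lines 3 through 5 of a hypothetical file.txt with the text REPLACED (add -i to edit the file in place instead of printing to stdout):
sed '3,5s/.*/REPLACED/' file.txt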
Turns out I was navigating to another page immediately and popping this fragment from the back stack. But the fragment wasn't immediately getting destroyed, hence the exception was thrown when I accessed the viewmodel in that state. So, to resolve it, I simply did not use the viewmodel if I was about to navigate somewhere else.
To solve this problem, you just need to use this dependency:
https://github.com/kadiraydinli/react-native-system-navigation-bar
You use it as follows:
import { useEffect } from 'react';
import SystemNavigationBar from 'react-native-system-navigation-bar';

export default function App() {
  useEffect(() => {
    SystemNavigationBar.setFitsSystemWindows(false);
    SystemNavigationBar.setNavigationColor('#COLOR'); // the color must be passed as a string
  }, []);
  return (
    //YOUR CODE
  );
}
Here's a function that will work with any range:
function multiplier($min, $max, $constant = 100) {
    // Scale to integers for mt_rand, then divide to get back a float
    return mt_rand($min * $constant, $max * $constant) / $constant;
}
echo multiplier(0, 1, 10000);
You can adjust the precision by increasing/decreasing the constant.
I looked at this, 15 years later.
Getting an Excel sheet to AS400 does not seem straightforward. Here's what I did:
copy the Excel sheet to a text file, or generate a CSV
upload the text file to the AS400 IFS under /home/user/file
create a source file: CRTSRCPF lib/file1
CPYFRMSTMF /home/user/file lib/file1
use SQL scripts to copy from lib/file1 to a database file
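For the CPYFRMSTMF step, the fully spelled-out CL command might look like this (library, file, and member names are placeholders):
CPYFRMSTMF FROMSTMF('/home/user/file') TOMBR('/QSYS.LIB/LIB.LIB/FILE1.FILE/FILE1.MBR') MBROPT(*REPLACE)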
Looks like the Spring Security team is going to add a new annotation, @AuthorizeRequestMapping, in Spring Security 7, to allow adding this security in controllers that will participate in the authorizeHttpRequests() part of the security chain:
https://github.com/spring-projects/spring-security/issues/16250
"php.executablePath": "C:/xampp82/php/php.exe"
First you need to start ScreenCast using XDG Desktop Portal interfaces.
This will give you PipeWire streams that you can use as target_id in pw_stream_connect.
How about routing to dashboards using packages?
httpYac partially supports Rest Client variables:
https://httpyac.github.io/guide/variables.html#rest-client-dynamic-variables
$datetime rfc1123|iso8601|"custom format"|'custom format' [offset option]
GET /anything?q={{$datetime iso8601}}
Did you ever solve this issue? I'm struggling with the same problem right now. :)
Some observations:
t = 0:1/Fpoints:d; % this results in an array 1x101
While:
w1 = hann(16384); % results in an array 16384x1
Multiplying them together with * results in a matrix which is 16384x101, so there is definitely some room for improvement here.
I cannot really tell, however, what you actually want from your code currently, but let's assume the following: sin(2 * pi * 5e7 * t), where t goes from 0 to (N-1)/Fpoints.
Here's a demo for now; please tell me if this is not what you wanted to do:
N = 16384;
Fpoints = 1e9;
t = (0:N-1)/Fpoints;
w1 = hann(N)'; % transpose the column window into a row
signal = sin(2 * pi * 5e7 * t);
windowedSignal = w1 .* signal; % both are rows now, element-wise product
figure(1);
subplot(3,1,1)
plot(t, signal)
ylabel("Signal")
xlim([0, t(end)])
subplot(3,1,2)
plot(t, w1)
xlim([0, t(end)])
ylabel("Window")
subplot(3,1,3)
plot(t, windowedSignal)
xlim([0, t(end)])
ylabel("Multiplied")
After a few tries I found this blog post; the configuration it suggests did not work on my project, however it had insights about the location of node_modules (it is a couple of folders above what it "should be", i.e. an extra ../../).
After some modifications I got it working:
packages/<package_name>/android/settings.gradle - add an extra ../../:
pluginManagement { includeBuild("../../../node_modules/@react-native/gradle-plugin") }
...
includeBuild('../../../node_modules/@react-native/gradle-plugin')
packages/<package_name>/android/app/build.gradle - modify the default locations and add an extra ../../:
...
reactNativeDir = file("../../../../node_modules/react-native")
...
codegenDir = file("../../../../node_modules/@react-native/codegen")
...
I found a solution. I don't know why it works now, but it does, I guess. Adding this to my RestClient:
.requestInterceptor(new ClientHttpRequestInterceptor() {
@Override
public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution) throws IOException {
log.info(request.toString());
return execution.execute(request, body);
}
})
Somehow this works; it also removed the Transfer-Encoding header.
In my case, the problem was solved by starting Fiddler first and then the program.
Upgrading to [email protected] resolved the problem.
I had to change the css imports:
from
import 'react-datepicker/dist/react-datepicker.css';
to
import 'node_modules/react-datepicker/dist/react-datepicker.css';
I'm getting a fatal error. Any idea why?
add_filter('woocommerce_email_recipient_customer_processing_order', 'email_recipient_custom_notification', 10, 2);
function email_recipient_custom_notification( $recipient, $order ) {
if ( ! is_a( $order, 'WC_Order' ) ) return $recipient;
// Set HERE your replacement recipient email(s)… (if multiple, separate them with a comma)
$recipient .= ', [email protected]';
return $recipient;
}
add_action('woocommerce_order_status_changed', 'send_custom_email_notifications', 10, 4 );
function send_custom_email_notifications( $order_id, $old_status, $new_status, $order ){
$wc_emails = WC()->mailer()->get_emails(); // Get all WC_emails objects instances
if ($new_status === 'ticket-ontime') {
$wc_emails['WC_Email_Customer_Processing_Order']->trigger( $order_id );
}
}
For anyone looking, this jobrunner does exactly that: https://github.com/sivann/jobrunner. It bridges the gap between HTTP requests and synchronous CLI command execution, with managed concurrency and basic monitoring.
Found this article (in Japanese, but my Firefox has a good embedded translation feature) with some background on this issue: https://github.com/koji-1009/zenn_article/blob/main/articles/fb612faf335fe3.md
I have the same problem, don't know what I'm doing wrong. Send help.
<button id="print-btn">Print</button>
let ajaxDone = false;
$(function () {
$.ajax({
type: "get",
url: "/evaluation/silver/userinfo",
dataType: 'json',
success: function (data) {
const parsed = typeof data === "string" ? JSON.parse(data) : data;
// update DOM
$("#user_name").text(parsed.name);
// other updates...
ajaxDone = true;
}
});
$('#print-btn').on('click', function () {
if (!ajaxDone) {
alert("Please wait, loading data...");
} else {
setTimeout(() => window.print(), 200);
}
});
});
Dude, thanks, you pointed me in the right direction:
I had some Mermaid syntax errors in my .md that caused problems, but I figured it out. In my renderers this was OK, but not during export.
Correct format:
```mermaid
NOT correct
``` mermaid
I made some additional changes, for anybody else finding this: a batch converter with some debugging, and the actual converter, also with some debugging.
File: markdown_converter_stackoverflow.bat
@echo off
setlocal enabledelayedexpansion
REM ============================================
REM CONFIGURATION VARIABLES - EDIT THESE
REM ============================================
set "AUTO_RUN=true"
set "AUTO_CLOSE=true"
REM AUTO_RUN: Set to "true" to skip all pause statements, "false" for step-by-step
REM AUTO_CLOSE: Set to "true" to close automatically at end, "false" to wait for keypress
REM ============================================
REM SCRIPT START
REM ============================================
echo Starting conversion process...
echo Input file: "%~1"
if /i "%AUTO_RUN%"=="false" pause
REM Check if a file path is provided
if "%~1"=="" (
echo Please provide a Markdown file path as an argument.
if /i "%AUTO_CLOSE%"=="false" pause
exit /b 1
)
REM Get the full path, filename, and directory of the input file
set "fullpath=%~f1"
set "filename=%~n1"
set "filedir=%~dp1"
set "scriptdir=%~dp0"
echo File details:
echo - Full path: %fullpath%
echo - Filename: %filename%
echo - Directory: %filedir%
if /i "%AUTO_RUN%"=="false" pause
REM Create (or recreate) a temporary folder
set "tempfolder=%filedir%%filename%"
if exist "%tempfolder%" rmdir /s /q "%tempfolder%"
mkdir "%tempfolder%"
echo Created temp folder: %tempfolder%
REM Copy images folder if it exists
if exist "%filedir%Figurer" (
echo Copying Figurer folder to temp directory...
xcopy "%filedir%Figurer" "%tempfolder%\Figurer\" /E /I /Y
if %errorlevel% equ 0 (
echo Images copied successfully
) else (
echo Warning: Failed to copy images
)
) else (
echo No Figurer folder found in: %filedir%
)
if /i "%AUTO_RUN%"=="false" pause
REM Run Mermaid-cli to preprocess the Markdown file
echo Processing Markdown file with Mermaid...
call mmdc -i "%fullpath%" --outputFormat=pdf --pdfFit -o "%tempfolder%\%filename%.md"
if %errorlevel% neq 0 (
echo Error: Mermaid-cli failed with exit code %errorlevel%
if /i "%AUTO_CLOSE%"=="false" pause
goto :cleanup
)
echo Mermaid-cli processing complete.
if /i "%AUTO_RUN%"=="false" pause
REM Change to the temporary directory
echo Changing to temporary directory...
pushd "%tempfolder%"
if %errorlevel% neq 0 (
echo Error: Failed to change to temporary directory
if /i "%AUTO_CLOSE%"=="false" pause
goto :cleanup
)
echo Changed to temporary directory successfully.
REM Show what's in the temp directory
echo Contents of temp directory:
dir /b
echo.
if /i "%AUTO_RUN%"=="false" pause
REM Convert the processed Markdown to PDF using Pandoc
echo Converting to PDF...
call pandoc "%filename%.md" -f markdown-implicit_figures -o "..\%filename%.pdf"
if %errorlevel% neq 0 (
echo Error: Pandoc failed with exit code %errorlevel%
if /i "%AUTO_CLOSE%"=="false" pause
popd
goto :cleanup
)
echo PDF conversion complete.
if /i "%AUTO_RUN%"=="false" pause
REM Change back to the original directory
echo Changing back to original directory...
popd
echo Changed back to original directory.
:cleanup
REM Clean up the temporary folder
echo Cleaning up...
@REM rmdir /s /q "%tempfolder%"
if exist "%filedir%%filename%.pdf" (
echo Conversion complete. Output file: %filedir%%filename%.pdf
) else (
echo Error: PDF file was not created.
)
REM Final pause based on AUTO_CLOSE setting
if /i "%AUTO_CLOSE%"=="false" (
echo Press any key to exit...
pause >nul
)
File: markdown__batch_conversion_script.bat
@echo off
REM ============================================
REM CONFIGURATION VARIABLES - EDIT THESE
REM ============================================
set "AUTO_RUN=true"
set "AUTO_CLOSE=false"
REM AUTO_RUN: Set to "true" to skip all pause statements, "false" for step-by-step
REM AUTO_CLOSE: Set to "true" to close automatically at end, "false" to wait for keypress
REM ============================================
REM SCRIPT START
REM ============================================
echo ========================================
echo Acts Academy Document Conversion
echo ========================================
echo Configuration: AUTO_RUN=%AUTO_RUN%, AUTO_CLOSE=%AUTO_CLOSE%
echo.
REM Use the working StackOverflow converter
set "CONVERTER=%~dp0markdown_converter_stackoverflow.bat"
echo Converting first document...
call "%CONVERTER%" "Markdown1.md"
echo.
echo Converting second document...
call "%CONVERTER%" "Markdown2.md"
echo.
echo ========================================
echo Conversion completed!
echo ========================================
REM Show what files were created
echo Generated PDF files:
dir /b *.pdf 2>nul || echo No PDF files found
echo.
REM Final pause based on AUTO_CLOSE setting
if /i "%AUTO_CLOSE%"=="false" (
echo Press any key to exit...
pause >nul
) else (
echo Auto-closing in 3 seconds...
timeout /t 3 >nul
)
Fix the file names, and place/run the bat file in the same directory.
After having posted a bug with the original developer, they got back to me about why this is happening. A more detailed explanation can be found here: https://github.com/dnsjava/dnsjava/issues/378
The short answer is: instead of getParentItem() you should call getParent() on the item in order to get the SQ element that contains this item, because a DcmSequenceOfItems is not a DcmItem. However, DcmDataset is derived from DcmItem; that's why you get the top-level DICOM data set when you call getParentItem() on an item of a top-level SQ element. All this information should also be available from the API documentation (if not, please let me know and I'll improve it).
In principle, the DcmPathProcessor could do what you want, but it is currently rather limited in terms of functionality and only supports the other direction, i.e. it finds an item or element (or even multiple) from a given path.
The asset folder should be one level higher, not in your lib folder.
This is supported as of OpenSearch v3.0.0.
https://docs.opensearch.org/docs/latest/query-dsl/compound/hybrid/
Try installing a specific version of omegaconf that satisfies the requirements:
pip install omegaconf==2.0.9
As long as it stays within the boundaries of 2.0.5 and 2.1 you should be fine; I only specified 2.0.9 as it is the newest version within the boundaries.
You need to use:
FDQuery1.FetchOptions.Unidirectional := True;
before the Open statement. You can lose the:
FDQuery1.FetchOptions.Mode := fmAll;
When saving a report that contains a URL action to PowerPoint, the action is lost if there is no value within the textbox. If there is a value within the textbox, the URL action is then attributed to the value, and the URL can be accessed with Ctrl+Click on the text value, not the textbox.
The screenshot above shows a textbox with the word "Test" in it; this word has the URL action. If this word is removed, then when the report is exported to PowerPoint the URL action is not included.
Figured out the issue:
import * as http from 'http';
Do you know how to fix it? I think GPT told you about pages_manage_events.
.sample {
width: 100%;
}
.sample td:nth-child(1),
.sample th:nth-child(1) {
width: 30%;
word-break: break-word;
}
.sample td:nth-child(2),
.sample th:nth-child(2) {
width: 70%;
}
Remove the table layout and add word-break: break-word to the left column.
The best platform for a Glassdoor-like website is a modular, cloud-native architecture.
This involves a React/Vue frontend for dynamic user interfaces, a Node.js/Go microservices backend for scalable and independent functionalities, and a robust data core combining PostgreSQL (for structured data), Elasticsearch (for fast search), and potentially a Graph database (for complex relationships).
All this is hosted on cloud services (AWS/GCP/Azure) for scalability and managed resources, integrating AI/ML (Python) for advanced analytics like sentiment analysis and personalized recommendations.
https://github.com/emkeyen/postman-to-jmx
This Python3 script converts your Postman API collections into JMeter test plans. It handles:
Request bodies: Raw JSON, x-www-form-urlencoded.
Headers: All your custom headers.
URL details: Host, path, protocol and port.
Variables: Both collection-level vars and env vars from Postman will be added as "User Defined Variables" in JMeter, so you can easily manage dynamic values.
[Sql.Expression(
@"TRANSLATE({0}, 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!@#$%^&*()_+-=[]{}|;:''"",.<>/?`~ ', '')",
ServerSideOnly = true,
InlineParameters = true)]
public static string ExtractDigits(this string input)
=> throw new NotImplementedException();
The approach above gave me an error saying that the input string was not in a correct format.
But I managed to make it work like this:
public static class SqlFunctions
{
[Sql.Expression(
"ISNULL((" +
"SELECT STUFF((" +
" SELECT '' + SUBSTRING(ISNULL({0}, ''), number, 1) " +
" FROM master..spt_values " +
" WHERE type = 'P' " +
" AND number BETWEEN 1 AND LEN(ISNULL({0}, '')) " +
" AND SUBSTRING(ISNULL({0}, ''), number, 1) LIKE '[0-9]' " +
" FOR XML PATH('')), 1, 0, '')" +
"), '')",
PreferServerSide = true,
ServerSideOnly = true
)]
public static string ExtractNumber(string input) => throw new NotImplementedException();
}
And I called this method in my query:
public void InsertPayoffData()
{
using var db = _db();
var query = db.Payoff1C
.Join(db.Debit,
p => new { InvoiceNumber = SqlFunctions.ExtractNumber(p.InvoiceNumber), p.InvoiceDate },
d => new { InvoiceNumber = d.InvoiceNumber.ToString(), d.InvoiceDate },
(p, d) => new { Payoff = p, Debit = d })
.Join(db.Kredit,
pd => new { pd.Payoff.PayDocNumber, pd.Payoff.PayDocDate },
k => new { k.PayDocNumber, k.PayDocDate },
(pd, k) => new { pd.Payoff, pd.Debit, Kredit = k })
.Where(joined => !db.Payoff.Any(pf =>
pf.Debit_ID == joined.Debit.DebitId &&
pf.Kredit_ID == joined.Kredit.Kredit_ID))
.Select(joined => new Payoff
{
Debit_ID = joined.Debit.DebitId,
Kredit_ID = joined.Kredit.Kredit_ID,
PayoffDate = joined.Payoff.PayOffDate,
PayoffSum = joined.Payoff.PayOffSum,
PayoffType = 0
});
var result = query.ToList();
db.BulkInsert(result);
}
Thanks everyone for your help!
You need to install the kernel so that it can be found:
uv run python -m ipykernel install --user --name "$NAME"
Similar to How to install a new Jupyter Kernel from script or Installing a ipykernel and running jupyter notebook inside a virtual env - not using conda
Use it like this:
const id = (await params).id
Unfortunately, you cannot directly stream the camera feed using both ARFoundation and Agora. A workaround for this is to save the camera output into a `RenderTexture` inside Unity and use that. You can then feed this into Agora. Just make sure to disable the camera capture of Agora if that is active, and call this method in your `Update()` function:
Texture2D frameTexture;
void CaptureFrame()
{
    // Capture the current screen into a temporary RenderTexture
    RenderTexture rt = new RenderTexture(Screen.width, Screen.height, 24);
    ScreenCapture.CaptureScreenshotIntoRenderTexture(rt);
    RenderTexture.active = rt;
    // Copy it into a readable Texture2D that can be handed to Agora
    frameTexture = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
    frameTexture.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    frameTexture.Apply();
    RenderTexture.active = null;
    rt.Release();
}
Here is the documentation from Agora's side on how to achieve this:
- Agora's Github for Unity: https://github.com/AgoraIO-Extensions/Agora-Unity-Quickstart
- Custom Video Source Docs ( using Agora's native SDK ) : https://docs.agora.io/en/video-calling/overview/product-overview
For me it didn't work; I don't have any lock file.
I checked the other answers and updated my locale variables as well, but I still have the same issue.
Here is my locale:
LANG="en_IN.UTF-8"
LC_COLLATE="en_IN.UTF-8"
LC_CTYPE="en_IN.UTF-8"
LC_MESSAGES="en_IN.UTF-8"
LC_MONETARY="en_IN.UTF-8"
LC_NUMERIC="en_IN.UTF-8"
LC_TIME="en_IN.UTF-8"
LC_ALL=
With Dart 3 there's now a new & more convenient way.
You can now just do:
await (future0, future1).wait;
Or if you need the results you directly can unpack them like this:
final (result0, result1) = await (future0, future1).wait;
This is a fairly common mistake I made some time ago. Transitions will only work if you actually set a height. Try to set the height to 0px and then, on click of a button, to 100px; the transition should work. If you do something like height 0px to height auto, it will not work.
If you need a dynamic height, the best way is to get the element inside the height container and extract its height dynamically.
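A minimal sketch of the working pattern (class names are made up):
.panel {
  height: 0;                      /* explicit start value */
  overflow: hidden;
  transition: height 0.3s ease;
}
.panel.open {
  height: 100px;                  /* explicit end value; height: auto will not animate */
}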
Good luck!
First, open currentFile.java and click on it; there you have an option "Source Action". Select that, then select all the methods for test generation, and click OK. A test file for your currentFile.java will be produced in the same window, as currentFileTest.java.
My solution was to discover that when I ran npm i, Microsoft Defender popped up Trojan:Win32/Kepavll!rfn, which caused ngrok.exe not to be generated in node_modules\@expo\ngrok-bin-win32-x64. After allowing it, my npx expo start --tunnel returned to normal. The problem occurred when I upgraded SDK 52 to SDK 53. I hope this method works for you.
How about the following (demo here, the mic won't work using the Run Code Snippet button below)
var flag = 1; // guard so the listener body runs only once
window.addEventListener("click", function () {
  if (!flag) return; // already listening, exit
  flag = !flag;
  document.getElementById("text").style.visibility = "hidden";
startListening(document.getElementById("micCircle"));
});
function startListening(e) {
var audioContext = window.AudioContext || window.webkitAudioContext;
var analyserNode, frequencyData = new Uint8Array(128);
if (audioContext) {
var audioAPI = new audioContext(); // Web Audio API is available.
} else {
/* ERROR HANDLING */
}
function animateStuff() {
requestAnimationFrame(animateStuff);
analyserNode.getByteFrequencyData(frequencyData);
var rang = Math.floor(frequencyData.length /2); // find equal distance in haystack
var FREQ = frequencyData[rang] / 255;
e.style.opacity = FREQ + 0.1
}
function createAnalyserNode(audioSource) {
analyserNode = audioAPI.createAnalyser();
analyserNode.fftSize = 2048;
audioSource.connect(analyserNode);
}
var gotStream = function(stream) {
// Create an audio input from the stream.
var audioSource = audioAPI.createMediaStreamSource(stream);
createAnalyserNode(audioSource);
animateStuff();
};
setTimeout(function(){ console.log( frequencyData )}, 5000 );
// pipe in analysing to getUserMedia
navigator.mediaDevices
.getUserMedia({ audio: true, video: false })
.then(gotStream);
}
html,
body {
display: flex;
align-items: center;
justify-content: center;
height: 100%;
}
.micContainer {
font-size: 40px;
color: #dd3333;
display: flex;
align-items: center;
justify-content: center;
}
#text {
position: absolute;
top: 0px;
}
#icon {
z-index: 20;
}
#micCircle {
opacity: 0;
position: fixed;
z-index: 10;
}
#micCircle1 {
background: #dd3333;
width: 70px;
height: 70px;
border-radius: 50%;
opacity: 0.5;
position: absolute;
top: -35px;
left: -35px;
z-index: 10;
}
#micCircle2 {
background: transparent;
width: 80px;
height: 80px;
border-radius: 50%;
border: 4px solid #dd3333;
position: absolute;
top: -44px;
left: -44px;
z-index: 10;
}
<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.2.0/css/all.min.css" rel="stylesheet"/>
<h3 id="text">click anywhere to start listening</h3>
<div class="micContainer">
<div id="micCircle">
<div id="micCircle1"></div>
<div id="micCircle2"></div>
</div>
<i id="icon" class="fa-solid fa-microphone"></i>
</div>
I found scroll-margin-top useful for this issue.
So, using it like:
.some-class {
scroll-margin-top: 4em;
}
hx-trigger="keydown[key=='Enter']"
Looks like you loaded a package that overwrites `mlr3::resample()`. Restart your R session.
File > Settings > Languages & Frameworks > Android SDK > SDK Tools > check Android Command line SDK Tools.
I hope the provided Stackblitz solution works for your use case. The fix involves adding the cdkDragHandle directive to the <h2> element, enabling drag functionality specifically on the header.
You could use an external Holiday API to retrieve holidays in a RESTful manner.
List of endpoints:
GET /v1/holidays/{countryCode}/upcoming
GET /v1/holidays/{countryCode}/search/date/{holidayDate}
GET /v1/holidays/{countryCode}/search/year/{year}
GET /v1/holidays/search/date/{holidayDate}
Example of response:
[
{
"date": "2025-06-19",
"name": "Juneteenth National Independence Day",
"localName": "Juneteenth National Independence Day",
"nationwide": true,
"country": {
"name": "United States of America",
"localName": "United States",
"alpha2Code": "US",
"alpha3Code": "USA",
"numericCode": "840"
},
"subdivisions": [],
"types": [
"Public"
]
},
{
"date": "2025-07-04",
"name": "Independence Day",
"localName": "Independence Day",
"nationwide": true,
"country": {
"name": "United States of America",
"localName": "United States",
"alpha2Code": "US",
"alpha3Code": "USA",
"numericCode": "840"
},
"subdivisions": [],
"types": [
"Public"
]
}
]
To be able to use their API you will need to generate an API key on their dashboard and subscribe to a product with a free trial (risk-free 3-day trial, then $110 per year or $12 per month).
Link to their API reference: https://api.finturest.com/docs/#tag/holiday
This API has moved to https://fipe.parallelum.com.br/api/v2/references as documented here: https://deividfortuna.github.io/fipe/v2/#tag/Fipe
The correct format of the repository URL goes like this: https://github.com/myuser/myrepo. Unless I'm missing something, GitHub can't be hosted locally, so it's expected that you use the URL with https://github.com.
Perhaps you mean GitLab rather than GitHub? In that case, you'll need to use the GitLab option in the VCS configuration setup.
I am also experiencing this same issue. The content fetches on local but not on the live site (server rendering). To test, I built a separate client-side page which fetches and renders Sanity content on the client side; surprisingly, that seems to work.
You can pass data back to the previous page by passing the data to the Navigator.pop function.
ScreenB.dart
Navigator.pop(context, yourStringData);
Catch it as follows:
ScreenA.dart
final result = await Navigator.push(
context,
MaterialPageRoute(builder: (_) => ScreenB()),
);
To replace all = with tabs:
awk -vRS="\=" -vORS="\t" '1' mytest.txt > mytest_out.txt
You can achieve this using the Mark feature of Notepad++ combined with regex.
First of all, ((?:mcc|mnc): \d.*) will match all values of mcc and mnc with the following digits.
You can then use the Mark feature with regex to mark all matching rows in your log.
Afterwards, go to Search, Bookmark, Remove Unmarked Lines.
Result:
After you've done this you can save the result in another file.
I had a similar error while trying to run a test scenario from U-Boot's .gitlab-ci.yml. It turned out that binman requires the following buildman action:
./tools/buildman/buildman -T0 -o ${UBOOT_TRAVIS_BUILD_DIR} -w --board sandbox_spl;
Then binman stops complaining about missing QUIET_NOTFOUND.
My solution is to press Enter, which cancels the popup.
import pyautogui
pyautogui.press('enter')
Here's a demo implementation with Python/SQLite, which allows for multiple types of events (e.g. based on remote IP): https://github.com/sivann/simpleban. As it uses a DB index, complexity should be O(log N).
As a supplement to the correct and helpful answer by Sweeper, this answer digs a bit deeper. You asked why your parsing threw an exception, and Sweeper already correctly said that it's because yyyy denotes a variable-width field. I want to show you the two places in the documentation where this is specified. Since for example MM gives a fixed-width field of exactly two digits, one can easily get surprised when neither yyyy nor uuuu gives a fixed-width 4-digit field.
The documentation of DateTimeFormatterBuilder.appendPattern() first refers to DateTimeFormatter for a user-focused description of the patterns. It in turn says specifically about years:
The count of letters determines the minimum field width below which padding is used. … If the count of letters is less than four … Otherwise, the sign is output if the pad width is exceeded, as per SignStyle.EXCEEDS_PAD.
So this allows yyyy to print, and as a consequence also parse, a year with either 4 digits or more than four digits with a sign.
The documentation of DateTimeFormatterBuilder.appendPattern() goes on to specify that appending a pattern of four or more letters y is equivalent to appendValue(ChronoField.YEAR_OF_ERA, n, 19, SignStyle.EXCEEDS_PAD), where n is the count of letters. We see that yyyy allows a field of width 4 through 19.
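To see both behaviours in one place, here is a small demo (expected output noted in the comments):
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.SignStyle;
import java.time.temporal.ChronoField;

public class YearWidthDemo {
    public static void main(String[] args) {
        // yyyy is variable-width: it accepts 4 to 19 digits
        DateTimeFormatter f = DateTimeFormatter.ofPattern("yyyy-MM-dd");
        System.out.println(LocalDate.parse("2023-06-15", f));  // 2023-06-15
        System.out.println(LocalDate.parse("12345-06-15", f)); // +12345-06-15

        // The documented equivalent of the four-letter year pattern
        DateTimeFormatter g = new DateTimeFormatterBuilder()
                .appendValue(ChronoField.YEAR_OF_ERA, 4, 19, SignStyle.EXCEEDS_PAD)
                .appendLiteral('-')
                .appendValue(ChronoField.MONTH_OF_YEAR, 2)
                .appendLiteral('-')
                .appendValue(ChronoField.DAY_OF_MONTH, 2)
                .toFormatter();
        System.out.println(LocalDate.parse("12345-06-15", g)); // +12345-06-15
    }
}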
You could try Finturest - Holiday API; it costs $110 per year. It supports 115 countries and 6 holiday types.
Endpoints:
GET /v1/holidays/{countryCode}/upcoming
GET /v1/holidays/{countryCode}/search/date/{holidayDate}
GET /v1/holidays/{countryCode}/search/year/{year}
GET /v1/holidays/search/date/{holidayDate}
Example of response:
[
{
"date": "2025-06-08",
"name": "Pentecost",
"localName": "Zielone Świątki",
"nationwide": true,
"country": {
"name": "Poland",
"alpha2Code": "PL",
"alpha3Code": "POL",
"numericCode": "616"
},
"subdivisions": [],
"types": [
"Public"
]
}
]
But how does using multiprocessing.Process solve this issue? @Kemp
Have you checked whether the loguru logs are saved in another folder? In a similar setup (NSSM + Python + loguru), I noticed that loguru logs are saved in base_folder\venv\Script, while NSSM stdout and stderr are saved in base_folder.
Is there an option for detecting the latter case?
No.
Please don't judge me for my code. I'm new here :)
I had the same problem today and found a simple solution for it. I hope it helps someone in the future.
My workaround is to get the TextBox out of the editable ComboBox and set the binding via C# code on the TextProperty of the extracted TextBox.
You have to add a Loaded event in the XAML code of the ComboBox:
<ComboBox x:Name="coboMyTestBox"
IsEditable="True"
Loaded="coboMyTestBox_Loaded"/>
This doesn't work with the Initialized event, because the editable TextBox is not initialized at that moment. You need the Loaded event!
Now extract the TextBox in your xaml.cs like this (myDataPreset is my MVVM object where I store the data; an example is further down):
private void coboMyTestBox_Loaded(object sender, EventArgs e)
{
//Extract the TextBox
var textBox = VisualTreeHelperExtensions.GetVisualChild<TextBox>((ComboBox)sender);
//Check if TextBox is found and if MVVM Object is Initialized
if (textBox != null && myDataPreset != null)
{
Binding binding = new Binding("MyStringVariable");
binding.Mode = BindingMode.TwoWay;
binding.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged;
BindingOperations.SetBinding(textBox, TextBox.TextProperty, binding);
}
}
Here is my class and function to extract the TextBox:
public static class VisualTreeHelperExtensions
{
public static T? GetVisualChild<T>(DependencyObject depObj) where T : DependencyObject
{
if (depObj == null) return null;
for (int i = 0; i < VisualTreeHelper.GetChildrenCount(depObj); i++)
{
var child = VisualTreeHelper.GetChild(depObj, i);
var result = (child as T) ?? GetVisualChild<T>(child);
if (result != null) return result;
}
return null;
}
}
And here is an example of my MVVM:
public class _MyDataPreset : INotifyPropertyChanged
{
//Private Definition
private string myStringVariable= "";
//Public Accessor - this is what the Binding calls
public string MyStringVariable
{
get { return myStringVariable; }
set
{
//You can modify the value here like filtering symbols with regex etc.
myStringVariable= value;
OnPropertyChanged(nameof(MyStringVariable));
}
}
//PropertyChanged Event Handler
public event PropertyChangedEventHandler? PropertyChanged;
private void OnPropertyChanged(string propertyName)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
}
I always initialize my MVVM preset right after the InitializeComponent() call, like this, or later if I need it at another point where I have to choose between multiple templates:
public partial class MyUserControl : UserControl
{
private _MyDataPreset? myDataPreset;
public MyUserControl()
{
InitializeComponent();
myDataPreset = new _MyDataPreset();
}
//Here comes the coboMyTestBox_Loaded Event
}
Note:
The extract function also works great with controls like the DatePicker, to access the TextBox in the background.
But now it is much better to use WEBP images instead of PNG: smaller size and almost the same quality.
In that case the isCrunchPngs parameter can be skipped.
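For reference, in the Gradle Kotlin DSL the flag looks like this (only relevant for PNGs you still ship; it has no effect on WEBP resources):
android {
    buildTypes {
        release {
            isCrunchPngs = false // skip PNG crunching
        }
    }
}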
A bit late: label1 contains a section clause while label2, label3, and label4 do not. This means that if one performs label1, then label2, label3, and label4 will also be executed. Even if label1 exits early, label2, label3, and label4 will still be performed.
Some mainframe shops prohibit the use of sections because of this fall-through effect when a section header is missing in the following paragraph(s). In your example, if you don't want to execute label2 when label1 is exited early, then code an exit-section rather than an exit-paragraph.
Again, in your example it is better to have label1 perform label2 than to perform label2 as part of a section.
Also, "perform thru" is an old construct which was needed when a go to exit statement was coded. Newer versions of COBOL have the exit-paragraph (and exit-section) statements, which terminate a performed paragraph or section early, thereby eliminating the need for a perform through, a go to statement, and an exit paragraph completely.
It is better to code multiple performs (perform label1 followed by perform label2) than perform label1 through label3, since it is easier to see "upfront" what will be performed, rather than looking at the performed paragraphs to see what is being performed and whether any other paragraphs exist between label1 and label2.
If the individual paragraphs were coded as label1, label2, and label3, then a perform label1 through label3 would also result in label2 being performed.
Bottom line, don't use sections, go to, and exit paragraphs but explicitly code only the paragraphs which are desired.
FWIW, I still code an exit paragraph containing only an EXIT statement after each paragraph, to serve only as a STOP sign to any PERSON reading the code and to remind them that each paragraph is a "stand-alone" entity with a single entry and exit point, and that no following paragraphs will be executed.
Syntax issue: it's ReactDOM, not REACTDOM.
I noticed the same (VS 17.13.7). I also noticed that if I stop debugging by closing the browser window that was opened when debugging started, then it does not close the other browser windows. If I stop debugging from Visual Studio's UI, then all browser windows are closed (assuming I haven't used Alex's workaround).
It is located in
/var/lib/postgresql/data/pg_hba.conf
The best way to set it up is to mount a volume to a local file, copy the content from the container's pg_hba.conf, and edit it locally. The default pg_hba.conf file can be found in the official docs: https://www.postgresql.org/docs/current/auth-pg-hba-conf.html
If you want to change the host, don't forget to apply the env in Docker:
ENV POSTGRES_HOST_AUTH_METHOD=trust
You can also use find to locate that file; make sure you are looking in the correct location (root: /):
root@test:/# find . -name "pg_hba.conf"
./var/lib/postgresql/data/pg_hba.conf
This solution saved my life. However, if you try to clear all slides from the destination PowerPoint after cloning them from the source, an error alert will still appear. To avoid this, you should copy the slides first and then delete the ones you don't need at the end of the process.
I also ran into this problem.
The solution is to download the files from: https://github.com/DucLQ92/ffmpeg-kit-audio/tree/main/com/arthenica/ffmpeg-kit-audio/6.0-2
Next, place the .aar file in the app/libs folder.
Add it to build.gradle:
dependencies {
implementation(files("libs/ffmpeg-kit-audio-6.0-2.aar"))
}
Assemble the project
Switching the Electron Builder appId back to its original value (like app.<yourapp>.com) stopped it from exactly matching the bundle ID in the provisioning profile, so codesign no longer injected a keychain-access-groups entitlement and the keychain prompt disappeared.
The wildcard in the same provisioning profile still covers the app, so the Automatic Assessment Configuration entitlement is honored and assessment mode continues to work.
Reasons why this may occur:
Antivirus removing the file: look at the arrow-pointed line; the Gradle file is being deleted by the antivirus.
Android Studio not having privileges to write to that location.
Solutions:
Disable third-party antivirus completely (or uninstall it) and disable the default Windows Defender/Security as well, so it does not interfere with Gradle or Android Studio moving/creating files.
Run Android Studio as Administrator.
Build -> Clean Project, then Assemble Project.
Invalidate Caches and Restart Android Studio, and restart the computer as well.
We have the same problem.
The issue occurs when Tika processes PDFs that do not contain selectable text; they appear to be image-based scans or flattened documents.
When these files are parsed by Tika, the extracted content looks corrupted or unreadable. Even when manually copying and pasting from the original PDF, the resulting text appears as strange or triangular symbols.
Do you have any idea how we could solve this issue?
OK, I think the problem is that $() variables are not available during YAML compilation, so it cannot work. What I should use here is the condition: parameter inside the task instead of the if-statement, and access the variable with $() or variables[] directly instead of passing it as a parameter.
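A sketch of what that looks like in the pipeline YAML (the step contents and variable name are placeholders):
steps:
  - script: echo "runs only when MYVAR equals yes"
    condition: eq(variables['MYVAR'], 'yes')  # evaluated at runtime, unlike a compile-time ${{ if }}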
Ensure driver_class is correctly specified; it is case-sensitive (e.g., com.mysql.cj.jdbc.Driver).
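For instance, assuming a Hibernate-style XML configuration (which is where a driver_class property typically lives):
<!-- hibernate.cfg.xml (sketch): the fully qualified, case-sensitive driver class -->
<property name="hibernate.connection.driver_class">com.mysql.cj.jdbc.Driver</property>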
The mobile browser does a lot of speculative prefetching. Since the service worker was set up to act only after a user clicks a link, these prefetch requests are not intercepted and modified. After clicking the link, the browser immediately presents the prefetched (unaltered) content, even though another request is sent and intercepted by the service worker.
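A sketch of a fetch handler that also catches speculative prefetches (rewriteResponse is a placeholder for the page's own modification logic; the Purpose/Sec-Purpose request headers vary by browser):
// service-worker.js
const rewriteResponse = (request) => fetch(request); // placeholder: modify the response here

self.addEventListener('fetch', (event) => {
  const req = event.request;
  const isPrefetch =
    (req.headers.get('Sec-Purpose') || '').includes('prefetch') ||
    req.headers.get('Purpose') === 'prefetch'; // older browsers
  if (req.mode === 'navigate' || isPrefetch) {
    event.respondWith(rewriteResponse(req));
  }
});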