Thanks to @yurzui, the problem was with:
spyOn(fakeVerificationSvc, "sendVerificationCode");
This prevents invoking the original method. To spy on the method and still run it, .and.callThrough() must be used:
// IMPORTANT !!! => .and.callThrough();
spyOn(fakeVerificationSvc, "sendVerificationCode").and.callThrough();
When compiling in Clipper/Harbour, you need to set the directive to include the correct file in your program.
In your case, the line should be:
#include "whateverlibraryfileitis.ch"
One thing to add to this discussion. If your RandomFunction() is asynchronous, then the compiler will expect an await usage. Some people seem to use "_ = RandomFunction()" when they could use "await RandomFunction()". The compiler, I assume, replaces "_" with something like "temp-var = await RandomFunction();". It is also, as has been pointed out, marked as uninteresting.
So, the point seems to be to get the compiler to stop barking while saving typing. Maybe there is more to it than this?
You may need to wrap the dates with # symbols, see this discussion. For example, in the rendered query, #11/01/2016#
use this because that way you're actually telling the system from where to import
thanks to Abdulazeez Salihu for providing the info
from website.template import create_app
I figured out the issue. The parameter, in this case, needed to be a single value, so I needed to do a LOD, which is what I ended up doing. The exact LOD I used was:
{ FIXED : DATE(MIN([Gift Date]))}
I had this problem; the reason was that VS Code was installed via Flatpak (no, never do that), meaning paths and access rights were limited. I reinstalled it via the terminal.
How do I calculate the scale size of the sprite, as per the above info, so that the sprite looks good?
If your sprite is rendered by a Sprite Renderer component, and you know the pixel dimensions at which it "looks good", there's a nice trick you can use to "calculate the scale size" corresponding to those dimensions.
Change the Draw Mode to Sliced in the inspector. Then change the Scale to x: 1, y: 1, z: 1. Then input the size that you like under "Size".
Now when you change the Draw Mode to Simple, the Scale will automatically become the way you want it.
The reason can also be an old Python version on the target node. You can check which Python version the target host needs on the support matrix page:
| Version | Control Node Python | Target Python / PowerShell |
|---|---|---|
| 2.19 | Python 3.11 - 3.13 | Python 3.8 - 3.13 PowerShell 5.1 |
Change the System.IO.Ports NuGet package to 8.0.0, as suggested by this similar issue on the Microsoft forum: https://learn.microsoft.com/en-us/answers/questions/1621393/system-io-ports-only-availble-on-windows-but-im-us#:~:text=I%20changed%20the%20system.io.ports%20package%20to%20version%208.0.0
It is perfectly normal to have your API's URL in the front-end code; it would not work otherwise.
However, you should always make sure that your API is only accessible to authorised users. You can do this by setting up a form of authentication; here is just one article I found that explains it, and you can easily find other methods if you google "API authentication".
The best answer is very dependent on your web app and backend, but I would not go live unless you have something like that in place! Something else to consider: make sure your APIs can't be exploited. If you use some sort of SQL, watch out for SQL injections, etc. Make sure your API's response doesn't contain data that the code does not need, because that might expose infrastructure that could be exploited!
what do you think about the following example (running in spark-shell)?
scala> spark.range(1).select(current_timestamp().as("ts"), to_utc_timestamp(current_timestamp(), current_timezone()).as("ts_utc")).show(false)
+--------------------------+--------------------------+
|ts |ts_utc |
+--------------------------+--------------------------+
|2025-10-02 15:41:42.104336|2025-10-02 14:41:42.104336|
+--------------------------+--------------------------+
scala>
Pyspark should have the same functions.
You were looking at old documentation (your link is for Entities @0.17), but the current version used in Unity 6.2 is @1.3.14 (or 1.4.0-pre.4 if you go experimental). In the package docs you'll find a version dropdown menu which takes you to the documentation of a newer version. Select the aforementioned version and you'll see the current package docs.
As for the DOTS Editor package, it's not needed and it's really old (2021 old). It basically tries to pull in a very early version of Entities. I looked into its functionality, and it is basically integrated into Entities now (e.g. baking GameObjects into Entities).
At any rate, just remove the DOTS Editor and perhaps reinstall Entities 1.3.14 and you'll be fine.
The expo-router library greatly disappointed me with its narrow abstraction and lack of support for nested components within the app directory. In Next.js projects, I implement '_extra_' directories alongside pages.
At the moment, it still has a long way to go to match the flexibility of Next.js's file routing. Its only advantage for me is its ability to automate deep linking.
Rollback to React Navigation...
As per the FAQs in the Google Maps API documentation, since the map you provided appears to be in India and you are attempting to display railway tracks or directions, I believe the issue you are encountering is due to the Indian Railway Catering and Tourism Corporation not being supported for transit directions.
There is a specific question in the documentation that asks: “In which countries are transit directions available?” The answer is:
"The Routes API supports all Google Transit partners, except the Indian Railway Catering and Tourism Corporation and those in Japan."
You have to log in with a Microsoft account to enable sync. When logged in using a GitHub account, checking the synchronization settings (Tools > Options > Environment > Account) showed "Not logged into any account".
To fix this, click on your account picture in the upper right corner > Add another account > Microsoft account.
Thanks to a comment, I discovered that the button was missing a position within the page; that's why it wasn't appearing.
MQL returns hierarchical JSON, SPARQL uses SELECT/WHERE; translation requires manually mapping MQL structure and Freebase properties to SPARQL.
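As a toy illustration of that manual mapping (a hypothetical helper, not a general translator): an MQL pattern like `{"id": ..., "<prop>": null}` asks Freebase to fill in the property, which corresponds to a one-triple SPARQL SELECT:

```python
def mql_to_sparql(mql):
    """Toy mapping: {"id": S, "<prop>": None} -> a one-triple SPARQL SELECT.
    A real translation must handle nesting, arrays, and Freebase property paths."""
    subject = mql["id"]
    # The property to fill in is the key whose value is None
    prop = next(k for k, v in mql.items() if v is None)
    var = prop.rsplit("/", 1)[-1]
    return f"SELECT ?{var} WHERE {{ ns:{subject} ns:{prop} ?{var} . }}"

query = mql_to_sparql({"id": "m.02mjmr", "/type/object/name": None})
print(query)
```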
Solved by saving the last position:
private Vector2 lastPos;

private void FixedUpdate()
{
    Vector2 v = rb.linearVelocity;
    Vector2 pos = rb.position;

    if (fixation.up && v.y > 0)
    {
        v.y = 0;
        pos.y = lastPos.y;
    }
    if (fixation.down && v.y < 0)
    {
        v.y = 0;
        pos.y = lastPos.y;
    }
    if (fixation.left && v.x < 0)
    {
        v.x = 0;
        pos.x = lastPos.x;
    }
    if (fixation.right && v.x > 0)
    {
        v.x = 0;
        pos.x = lastPos.x;
    }

    rb.linearVelocity = v;
    rb.position = pos;
    lastPos = rb.position;
}
Yes, but you need to stay on the first line only. You can use -- to start your comment. You cannot use accents like "à".
What I did was a simple, stupid fix that I didn't know would ever fix this issue, but hey, it works, so I can't complain. I found that adding a .coordinateSpace fixed the issue on all screens affected by this bug. The following code was my fix:
ScrollView(showIndicators: false) { ContentView() } .coordinateSpace(name: "scrollViewCoordinateSpaceName")
So I hope this can help some people out. If you have any more questions, feel free to ask. Happy coding to y'all :)
When randomness is involved in the output of a method, instead of checking exact values you can validate the structure of the output. For your example, the validation method would check that:
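For instance, a structure check for a hypothetical method returning a list of random IDs might look like this (the names here are illustrative, not from the original question):

```python
import random

def random_user_ids(n):
    # Hypothetical method under test: returns n random 4-digit IDs
    return [random.randint(1000, 9999) for _ in range(n)]

def validate_structure(result, expected_len):
    # Validate shape and types, not the (random) values themselves
    assert isinstance(result, list)
    assert len(result) == expected_len
    assert all(isinstance(x, int) for x in result)
    assert all(1000 <= x <= 9999 for x in result)

validate_structure(random_user_ids(5), expected_len=5)
```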
Both the "Autoupdate imported tables on model startup" option and the "Update tables data" button run the "importFromExternalDB()" function from ModelDatabase API (https://anylogic.help/api/com/anylogic/engine/database/ModelDatabase.html#importfromexternaldb). You can also use this function to force data updates at runtime. For a step-by-step guide, check out the Help article: https://anylogic.help/anylogic/connectivity/import.html#access
Usage example: https://www.anylogic.com/blog/importing-data-from-external-database-step-by-step-guide/?sphrase_id=7961888
I have been able to avoid timeouts by using port 587 instead of 465 (source of the idea: https://github.com/chatwoot/chatwoot/issues/7869#issuecomment-1921519824). It turns out my provider supports port 587, even though they officially advertise 465, and for some reason 587 works.
Open http://localhost/pgadmin4; it works fine on Ubuntu.
In CNPG, check the entries in pg-config.yml; you have to allow the IP address, or better yet, create a new pod and test everything from there.
This might have happened due to using "using namespace std" in multiple header files (.hpp). I ran into this issue and got rid of those lines from all my .hpp files, leaving one in my .cpp. I don't know why, but there must be some kind of conflict that causes the compiler to throw this error. You could try this if you don't need them in your headers.
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>YouTube Single Play</title>
  <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
  <script src="https://www.youtube.com/iframe_api"></script>
</head>
<body>
  <iframe src="https://www.youtube.com/embed/FKWwdQu6_ok?enablejsapi=1" frameborder="0" allowfullscreen id="video1" width="400" height="225"></iframe>
  <iframe src="https://www.youtube.com/embed/FKWwdQu6_ok?enablejsapi=1" frameborder="0" allowfullscreen id="video2" width="400" height="225"></iframe>
  <iframe src="https://www.youtube.com/embed/FKWwdQu6_ok?enablejsapi=1" frameborder="0" allowfullscreen id="video3" width="400" height="225"></iframe>
  <script>
    var players = {};
    var currentlyPlaying = null;

    function onYouTubeIframeAPIReady() {
      $('#video1, #video2, #video3').each(function() {
        var id = $(this).attr('id');
        players[id] = new YT.Player(id, {
          events: {
            'onStateChange': function(event) {
              if (event.data == YT.PlayerState.PLAYING) {
                // Pause other videos
                $.each(players, function(key, player) {
                  if (key !== id && player.getPlayerState() == YT.PlayerState.PLAYING) {
                    player.pauseVideo();
                  }
                });
              }
            }
          }
        });
      });
    }
  </script>
</body>
</html>
According to an Apple engineer, the change in behavior is caused by a Swift Evolution change, namely SE-0444 (Member import visibility).
See the official answer here: https://developer.apple.com/forums/thread/802686?answerId=860857022#860857022
The solution is to import Combine wherever a class conforming to the ObservableObject protocol is declared.
I had a similar problem using this new version of Airflow. I scheduled the DAG, but it wasn't executed by the worker and always remained in the queue.
Adding this variable to your compose file might help:
AIRFLOW__CORE__EXECUTION_API_SERVER_URL: 'http://airflow-apiserver:8080/execution/'
pip -V is a quick way to check which Python environment pip is associated with.
Try:
Row(verticalAlignment = Alignment.CenterVertically) {
    Column {
        Text("Hello World")
    }
    Column {
        Text("first")
        Text("Hello World")
        Text("last")
    }
}
Output:
For some reason iOS 26 requires explicit size for titleView. Add width and height constraints to aTitleView and it should appear on screen.
The bug is setGalleryImg(`img${e.target.id}`);
That sets the state to the literal string "img1" / "img2"… not to the imported image module (img1, img2, etc.). As a result, React tries to load a URL literally called img1, which doesn’t exist, so the image “disappears”.
Use useEffect to log your useState value.
This rule for conditional formatting works in my sample sheet. This doesn't require VBA code.
=VLOOKUP(LEFT(E2,SEARCH(" ",E2)-1),$I$2:$J$8,2,0)*0.75<G2
You can't fully prevent browsers from offering to save passwords, but you can strongly discourage it in your login form with the correct HTML attributes and form setup.
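As a minimal sketch of the usual attributes (browsers treat these as hints and may still offer to save; field names are illustrative):

```html
<form method="post" autocomplete="off">
  <input type="text" name="username" autocomplete="username">
  <!-- autocomplete="new-password" is the strongest hint against save prompts -->
  <input type="password" name="password" autocomplete="new-password">
  <button type="submit">Log in</button>
</form>
```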
Isn't this whole thing being made needlessly complex?
What is the correct way to provide that callback with the latest state?
The trick is to store the variable outside React and take the value from there, perform whatever is needed on that value, and then update the React variable to trigger rerenders, useEffect(), etc. https://playcode.io/2567999
import React, { useEffect } from 'react';

// Maintain a copy of the variable outside React
var _count = 0;

export function App(title) {
  const [count, setCount] = React.useState(0);

  useEffect(() => {
    const intervalTimer = setInterval(() => {
      console.log(`React count=${count}`);
      console.log(`Non React count=${_count}`);
      // Latest value is always available in _count outside React
      // Perform whatever is needed on that value
      _count += 1;
      // Store in the React variable for rerender / useEffect retrigger etc.
      setCount(_count);
    }, 3000);
    // Clear the interval on unmount
    return () => clearInterval(intervalTimer);
  }, []);

  return (
    <div>
      Active count {count} <br />
    </div>
  );
}
Unfortunately, borderRadius is only supported for 2D charts in Highcharts, not when you’re using 3D columns (options3d). In 3D mode, Highcharts draws custom SVG shapes (cuboids), so the built-in borderRadius option doesn’t apply.
When creating your Pivot Table select "Add this data to the Data Model".

After creating the Pivot Table make sure that in Pivot Table Options "Show items with no data on rows" isn't selected.

Hamming code works even if the two strings don't have the same length; most of your code is wrong, sorry...
(sorry for my language, I'm French...)
Found a way to make Azure SQL connections using private IP work:
Set the jdbc url to "jdbc:sqlserver://<private_ip>:1433;databaseName=<yourdatabase>;trustServerCertificate=true".
Set the SQL DB username to "<db_user>@<your_azure_sql_server>.database.windows.net".
It needs the SQL server name appended to the user name with an @ to let Azure SQL accept the connection.
There are a few issues with your code.
Firstly, the lifecycle event is
connectedCallback
not
connectedCallBack
and second, don't use regular quotes when you have line breaks;
use template literals instead, i.e. backticks (` `) instead of regular quotes (' ').
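A quick illustration of the difference (the markup here is just an example):

```javascript
// Template literals (backticks) allow line breaks and ${} interpolation,
// which regular quotes do not.
const name = "world";
const html = `
  <div>
    <p>Hello ${name}</p>
  </div>
`;
console.log(html.includes("<p>Hello world</p>")); // prints: true
```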
Here is a link that basically explains different methods of querying a database from a Spring Boot application. The last example ("Dynamic Queries Using Specifications") is very much like TypeORM I used in a NestJS application.
Try out mapatlas.xyz; they also have an extensive geocoder that you could use. They also have a big free tier and give out grants to promising projects!
Use
AppInfo.Current.ShowSettingsUI();
i think this way is true (:
This behaviour is actually documented in https://github.com/HangfireIO/Hangfire.InMemory?tab=readme-ov-file#maximum-expiration-time
Specifying the InMemoryStorageOptions fixed the problem. In my case:
builder.Services.AddHangfire(configuration => configuration
    .SetDataCompatibilityLevel(CompatibilityLevel.Version_180)
    .UseSimpleAssemblyNameTypeSerializer()
    .UseRecommendedSerializerSettings()
    .UseInMemoryStorage(new InMemoryStorageOptions { MaxExpirationTime = null })
    .WithJobExpirationTimeout(TimeSpan.FromDays(60))
);
Any service marked injectable that you want to import into another module must be in the exports array of its own module. Hope that helps.
There is no running away from it: either upgrade to a higher tier than M0 or run a MongoDB cluster with three replica sets using Docker: https://github.com/crizstian/mongo-replica-with-docker. Also add a limit to avoid scanning too many results. Try using "keyword" for symbol.
My bad here. The patches were correctly applied, but to a different branch from the one I was expecting. I was expecting them to appear on the devtool branch; instead they were applied on the rpi-6.6.y branch, which is the correct version my Yocto config targets. So, yeah, I think I still need a better understanding of devtool.
Some Indy files were missing in the original release of Delphi 13.
They have been included in the 1st patch, mentioned here: https://blogs.embarcadero.com/rad-studio-13-september-patch-available/
Thanks for sharing such a detailed MRE!
I think the issue might be in using StaticRouter. Try and use MemoryRouter instead as it is the standard for extensions.
import { MemoryRouter as Router, Routes, Route, Link } from "react-router-dom";
If the system can't start the graphical interface:
-> press Ctrl + Alt + F2
-> log in with your username & password
and reinstall the graphics drivers:
sudo apt-get update
sudo apt-get install --reinstall nvidia-driver- # for NVIDIA
sudo apt-get install --reinstall xserver-xorg-video-intel # for Intel
then reboot
The short answer then is: no, the backing memory being pinned or unpinned doesn't matter for unsafeFreeze or unsafeThaw usage patterns.
You need to select the database connection you'd like to use to run the script.
On the main toolbar, click the menu item highlighted below.
In the dialog box that opens, double click the database connection you'd like to use.
Verified to work with DBeaver Version 25.2.1.202509211659, release date 2025-09-22.
Solved.
I submitted the problem to Copilot, Gemini, and Stack Overflow AI, and none of them were able to find the problem and fix it.
All I had to do was change the first two rules in onEnterRules to this:
...
{
    "beforeText": "^((?!\\/\\/).)*\\s*\\$[a-zA-Z0-9_]+\\([a-zA-Z0-9%_]+,$",
    "action": { "indent": "indent" }
},
{
    "beforeText": "^((?!\\/\\/).)*\\s*.*,$",
    "action": { "indent": "none" }
},
...
Basically, I added the end-of-line character $ to beforeText, and everything started working as expected again.
You can do this on your Text
Modifier.graphicsLayer {
rotationZ = 90f
}
Thank you everyone for your helpful feedback. I found the issue, which was in the call datatable(rv$mtcars[1:min, 1:3]). If the "min" variable is larger than the total number of rows after transposing, the DT rows will disappear entirely. I thought I had accounted for this through the line min <- min(paste(length(colnames(rv$data))), paste(length(rownames(rv$data)))), but by using "paste" this number was incorrectly calculated and the actual minimum was wrong. By removing "paste" from the calculation, the number was calculated correctly. I was not able to replicate this with mtcars, so I don't know the exact cause, but at least the issue is solved.
In case your error handling is strict you might want to add a try construct around your mysqli_connect command:
try {
    $conn = new mysqli($server, $user, $pw, $db);
    /* do something ... */
} catch (mysqli_sql_exception $e) {
    echo "Connection failed: " . $e->getMessage();
}
Nevermind, I forgot to convert the Timestamp column into a datetime format via pd.to_datetime(), which is why it was so slow.
You cannot find the problem without a crash report. If you cannot connect it to your computer, then use a crash reporting library, e.g. Firebase Crash Reporting or Crashlytics.
The same happened to me; my problem was that I had an unresolved merge conflict. I pressed 'Abort' and then I could use the Fetch/Pull UI buttons again.
I'm new on this platform and this conversation is from a while ago, but I'm still struggling with this question. After I use the database delete functionality, I can still find the database when I look in the Device Explorer (using Android Studio). What am I doing wrong? Thanks.
I think the only "sure" solution is to rewrite the control in C++ for 64-bit, but it would be one heck of a job.
An easy way to find unmanaged resources is to set up a tag policy, e.g. "managed-by":"terraform" and add that tag to all resources in your terraform manifests. Then manually created resources won't have that tag and you'll find them in the list of non-compliant resources. That assumes, that your users don't manually add that tag to trick you, of course.
Here's how to set up a tag policy in Azure and via the azurerm terraform provider.
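A minimal sketch of the tagging side via the azurerm provider (resource names are illustrative):

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "rg-example"
  location = "westeurope"

  # The tag your policy audits; manually created resources will lack it
  tags = {
    "managed-by" = "terraform"
  }
}
```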
That's expected: JBoss already provides the javax.transaction API as part of Java EE, so it will be provided to your application alongside the one you have added. That's why it is bad practice to add Java EE jars to your application; they should be in Maven's provided scope.
Just set the height of the div in CSS. Because you floated the icon, it no longer acts to expand the div. You must put some CSS on your div, like style="height:25px".
Root causes:
CommonJS interop: react-microsoft-clarity is published as CommonJS. With Vite (ESM-first), named imports like import { clarity } from 'react-microsoft-clarity' won’t work. You’ll see “Named export 'clarity' not found.”
SSR/top-level execution: Calling clarity.init at module top-level (e.g., in app/root.tsx) runs during SSR/build or in environments where window isn’t defined, causing “window is not defined.”
Recommended fixes:
Only initialize Clarity in the browser, after mount (inside useEffect).
Dynamically import the package to avoid CJS named export issues and to ensure code only runs client-side.
Use only Vite’s import.meta.env.VITE_* for client-accessible env vars.
Use panzoom extension: https://github.com/PLAYG0N/mkdocs-panzoom
My config in mkdocs.yml
plugins:
  - panzoom:
      always_show_hint: false
      hint_location: "top"
      full_screen: true
      include_selectors:
        - ".mermaid"
        - "img"
Source files are included using Kconfig. See this file for the relevant ones for azure_iot_hub. And from here we can see that azure_iot_hub_dps.c is only included if CONFIG_AZURE_IOT_HUB_DPS is set. Have you set this?
If you have set this, check your logs for any Kconfig Override Warnings just in case.
And lastly, I recommend having a look at build/<project_name>/zephyr/.config and look for the configuration, as that is where you see what configurations are set at the end of a build, so if it is set there, you know it is really set.
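If it is not set, enabling it is one line in your application's prj.conf (a sketch, assuming the DPS feature is what you need):

```
# prj.conf: pull azure_iot_hub_dps.c into the build
CONFIG_AZURE_IOT_HUB_DPS=y
```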
Oracle has released this driver for gorm.
I came across a handy plugin that lets you easily edit WooCommerce sort option labels and even hide the ones you don’t need. You can check it out here: https://wordpress.org/plugins/sort-options-label-editor-for-woocommerce/
What is the solution to this? It doesn't work.
Let's break down this error:
Error: Error when using WinSCP to upload files: WinSCP.SessionLocalException: The version of: C:\Program Files (x86)\WinSCP\WinSCP.exe (5.19.6.0) does not match version of this assembly
C:\Windows\Microsoft.Net\assembly\GAC_MSIL\WinSCPnet\v4.0_1.8.3.11933__2271ec4a3c56d0bf\WinSCPnet.dll (5.19.5.0).
As you can see, one file path has (5.19.6.0) and the other file path has (5.19.5.0).
You need to have matching-version .dll files in both paths.
Plus, I would make sure your project has matching .dll files as well.
Allure report not showing screenshots from TestNG Listener
I’m trying to attach screenshots in my Allure report using a TestNG ITestListener.
Here is my listener code:
@Attachment(value = "Screenshot", type = "image/png")
public static byte[] attachScreenshot(WebDriver driver) {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}

@Override
public void onTestFailure(ITestResult result) {
    WebDriver driver = DriverManager_For_Jenkins.getDriver();
    if (driver != null) {
        attachScreenshot(driver);
    }
}
In my LoginPageTest, I also added:
@Listeners({
    RetryListener.class,
    ScreenshotListener.class,
    io.qameta.allure.testng.AllureTestNg.class
})
public class LoginPageTest extends CommonToAllTest {
    // test methods
}
and in testng.xml:
<listeners>
    <listener class-name="io.qameta.allure.testng.AllureTestNg"/>
    <listener class-name="org.example.listeners.RetryListener"/>
    <listener class-name="org.example.listeners.ScreenshotListener"/>
</listeners>
But when I generate the report with:
allure generate target/allure-results --clean -o target/allure-report
allure open target/allure-report
→ The report runs fine, but no screenshots are shown in the “Attachments” tab.
How can I correctly attach screenshots to Allure reports from a TestNG listener?
The problem was caused by duplicate listener declarations.
I had added ScreenshotListener and RetryListener both in testng.xml and in the test class via @Listeners annotation.
Because of this, the listener was being triggered multiple times and Allure wasn’t attaching the screenshots properly.
Keep the listeners only in one place.
If you want them to apply globally → declare them in testng.xml.
If you want them only for specific classes → use @Listeners annotation.
Do not use both together.
So I removed the @Listeners annotation from my test class and only kept the listeners in testng.xml:
<listeners>
    <listener class-name="io.qameta.allure.testng.AllureTestNg"/>
    <listener class-name="org.example.listeners.RetryListener"/>
    <listener class-name="org.example.listeners.ScreenshotListener"/>
</listeners>
After running tests and regenerating the report, screenshots started appearing under the “Attachments” section in Allure. ✅
What you are looking for is i18n-ally, a VSCode extension that will help you manage your translation files with numerous options. Give it a try.
Handler dispatch failed: javax.xml.stream.FactoryConfigurationError: Provider for class javax.xml.stream.XMLInputFactory cannot be created
Please solve my problem.
The default timeout in Apigee is 55 seconds for your backend to return a response. If the backend takes longer than this, you may experience 504 Gateway Timeout errors. Backend latency could be due to blocked processes or other reasons. Hope this is helpful.
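If your backend legitimately needs more time, the target i/o timeout can be raised on the TargetEndpoint; a sketch (the property name io.timeout.millis is from the Apigee docs, but check your plan's upper limits and substitute your own backend URL):

```xml
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <URL>https://backend.example.com</URL>
    <Properties>
      <!-- Raise the target i/o timeout from the 55 s default to 120 s -->
      <Property name="io.timeout.millis">120000</Property>
    </Properties>
  </HTTPTargetConnection>
</TargetEndpoint>
```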
The only solution that works without hacking jQuery UI is to add an event listener to the mousedown event of the year drop-down, e.g.:
$(document.body).delegate('select.ui-datepicker-year', 'mousedown', function() {
    var select = $(this);
    var opts = select.children().get();
    if ($(opts).first().val() < $(opts).last().val()) {
        select.empty().html(opts.reverse());
    }
});
fmt: 244 // video itag
afmt: 251 // audio itag
seq: 3 // this is the 3rd QoE msg
bh: 10.039:30.764 // buffer health: the buffer level at 10 sec is 30 sec
Is there a way of doing this without an if-statement?
No.
"In either order" means that there are multiple possible result patterns.
There are two "in either order"s, so the number of possible results is 4. However, in this question's case, the number of patterns to consider is 2: that is, the selection of the swap targets.
The results are not necessarily the same, and which pattern is valid depends on the inputs. So it looks like it is inevitable to select the pattern to adopt based on the inputs.
(Of course, depending on the language you use, you may be able to write code that branches with something other than if, but that is not of interest here.)
I had the same issue. The problem persisted even when trying the previous recommendations. I found that the issue was happening when the ListBox's "ListStyle" property was set to "1 - frmListStyleOption". Hope this can help someone.
It sounds like the error is only occurring in certain sheets, even though the same function is present across multiple ones. That usually points to something in the specific sheet context rather than the function itself. A few things you might want to check.
Change While to lowercase while because Java keywords are case-sensitive.
To be able to log in to Google in a production application, don't forget to enable the OAuth consent screen for production.
You must visit https://console.cloud.google.com and go to Menu → APIs & Services → OAuth consent screen.
Then select the "Audience" side menu and change the Publishing status to production. The review process can take up to 4-6 weeks.
To install that package, you should try:
pip install pytest-warnings
chatFirebaseModels
    .compactMap { element in
        guard element.status != cancelled else { return nil }
        return element
    }
conda config --add channels bioconda
conda config --add channels conda-forge
conda config --set channel_priority strict
The Conda channel configuration order must be set before installing multiqc, according to the official documentation.
ffmpeg.exe -i input.mp4 -map 0:v -c:v copy -map 0:a -c:a:0 copy -map 0:a -c:a:1 aac -b:a:1 128k -ac 2 output.mp4
-map 0:v select all video from stream 0
-c:v copy copy this video to first video output stream
-map 0:a select all audio from stream 0
-c:a:0 copy copy this audio to first audio output stream
-map 0:a select audio from stream 0 (again)
-c:a:1 aac convert audio to aac for second audio output stream
-b:a:1 128k set bitrate of second audio stream to 128k
-ac 2 set channels of second audio stream to 2
As you can see, audio must be selected twice, the first time to copy it, the second time to convert and output it to a new stream.
Console will show:
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Stream #0:1 -> #0:2 (ac3 (native) -> aac (native))
Short answer: No—you can’t change it. That leading chunk before .environment.api.powerplatform.com is an internal, immutable host prefix assigned to each environment/region. It isn’t your environment GUID and it isn’t something you can rename or shorten.
The most straightforward approach is to define `SPDLOG_ACTIVE_LEVEL` before including spdlog headers:
```c++
#define SPDLOG_ACTIVE_LEVEL SPDLOG_LEVEL_TRACE
#include <catch2/catch_all.hpp>
#include <spdlog/spdlog.h>

int bar(int x)
{
    SPDLOG_TRACE("{} x={}", __PRETTY_FUNCTION__, x);
    return x + 1;
}

TEST_CASE("Bar")
{
    REQUIRE(bar(3) == 4);
}
```
In 2025, with Rails 8, use url('logo.png');
it will be rewritten to use the asset pipeline automatically with the latest sprockets-rails.
If you have a problem with flickering during redrawing, you can check out my improvement against flickering.
Check out this post!
Enjoy your development!
Was able to get it working
by creating:
src/api/profile/controllers/profile.ts
export default {
  async update(ctx) {
    try {
      const user = ctx.state.user;
      if (!user) {
        return ctx.unauthorized("You must be logged in");
      }
      const updatedUser = await strapi.db.query("plugin::users-permissions.user").update({
        where: { id: user.id },
        data: ctx.request.body,
      });
      ctx.body = { data: updatedUser };
    } catch (err) {
      console.error("Profile update error:", err);
      ctx.internalServerError("Something went wrong");
    }
  },
};
src/api/profile/routes/profile.ts
export default {
  routes: [
    {
      method: "PUT",
      path: "/profile",
      handler: "profile.update",
      config: {
        auth: { scope: [] }, // requires auth
      },
    },
  ],
};
Then, on Roles in the "Users & Permissions" plugin, scroll down to find the api::profile section:
http://localhost:1337/admin/settings/users-permissions/roles/1
and enable "Update" on Profile.
for the request:
PUT {{url}}/api/profile
header: Authorization: bearer <token>
body { "username": "updated name" }
It's working, but I'm not sure if this is the recommended way.
If anyone has a better answer, please share it. Thank you.
A better way to send an array could be as follows:
const arr = [1, 2, 3, 4, 5];
const form = new FormData();
arr.forEach(element => {
    form.append('name_of_field[]', element);
});
const result = await axios.post("/admin/groups", form, {
    headers: { 'Content-Type': 'multipart/form-data' }
});
I am answering this question without a clear understanding of your deployment method, but I'll assume that:
As you mention in your question, it seems wise to separate the internal data from the program itself, and ideally work with 50MB executables, and a compressed 650MB internal data zip.
I would advise that when your executable runs for the first time, you check for the existence of your internal data at a predefined location such as C:\Program Files\MyAppDeps v3_internal (as you pointed out in your question). If this data does not exist, it is installed from an index to that specified location. Of course, you add version-checking logic that ensures the existing data is up to date, and if not, you let the user know they should update the data using your application when appropriate.
You could also have your executable check whether it is up to date with the version on the index and follow the same logic as above.
I hope this was useful, please let me know if I should expand on this solution.
If you want the requirements.txt to include only top dependencies you can use: pip list --not-required --format freeze > requirements.txt
In my case, I forgot to set the environment name where my environment secret was defined, which caused my action to be unable to access the environment secrets.
jobs:
  build:
    environment: github-pages
    runs-on: ubuntu-latest
I've been having a similar issue @user51
Your screen recording isn't available anymore but in my case there appears to be an issue with the way the data export rule handles tables that follow the default workspace analytics retention settings.
Assuming you are setting up the data export rule via the Azure UI, here is a workaround that worked for me.
Open the Settings > Tables menu of your Log Analytics Workspace
For any Tables you wish to export that have an Analytics retention settings that follows the workspace default do the following:
Return to setting up your data export rule.
I have a private support case open with Azure support for this issue so I will update here if they respond with anything actionable.