This is normal! You need to use custom definitions.
User-property values are sent with an event only when the value differs from the previous one, so if nothing changes, user_properties appears only on the first event.
To propagate user_properties to all events exported to BigQuery, register the desired user-property fields as custom definitions.
In GA4 console go to:
Admin -> Data Display -> Custom definitions -> Create custom definitions
After much trial and error, I think the root cause of this behavior is the default renaming of JWT claim types during token creation, so I used this line of code before generating the claims:
JwtSecurityTokenHandler.DefaultOutboundClaimTypeMap.Clear();
And now JwtSecurityTokenHandler writes and reads tokens with the claims' default names.
I figured out the auth endpoint for getting a token. It's not https://auth.euipo.europa.eu/oidc/accessToken, it's https://euipo.europa.eu/cas-server-webapp/oidc/accessToken . So you were using the wrong endpoint. The website doesn't say it, but I got it from their API java file on their website at https://dev.euipo.europa.eu/product/trademark-search_100/api/trademark-search#/Trademarksearch_100/overview .
You can block the F5 refresh key by putting the following script in your main layout page (note this cannot block the browser's reload button):
<script>
document.addEventListener('keydown', (e) => {
  // 'F5' is the modern key value; 116 is the legacy keyCode fallback
  if (e.key === 'F5' || e.keyCode === 116) {
    e.preventDefault();
  }
});
</script>
You can use this tool to format/beautify your JSON.
It's possible that your issue is caused by the API handling timezones differently in local vs. development environments. For example, when running on localhost, it might use your system's local timezone, while in the dev environment it could be using the server's timezone (often UTC or whatever the host is configured with).
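One way to rule this out is to do all server-side time handling in UTC with timezone-aware values, converting only at the display layer. A minimal Python sketch of the idea (your API may be in a different stack, so treat this as illustrative):

```python
from datetime import datetime, timezone

def now_utc() -> datetime:
    """Return an aware UTC timestamp, independent of the host's local timezone."""
    return datetime.now(timezone.utc)

# Serialize in ISO 8601 with an explicit offset, so local and server
# environments cannot silently disagree about what the timestamp means.
stamp = now_utc().isoformat()
print(stamp.endswith("+00:00"))  # True
```

Then the only timezone-dependent step is formatting for the user, which you can do client-side.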
See this answer - the solution is that you need to emit declarationMap files (.d.ts.map) into your dist directory as well as the other files.
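For reference, the relevant compiler options in tsconfig.json would look something like this (outDir is whatever your build writes to; the two declaration flags are what produce the .d.ts and .d.ts.map files):

```json
{
  "compilerOptions": {
    "declaration": true,
    "declarationMap": true,
    "outDir": "dist"
  }
}
```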
The layout works correctly. The second-to-last row has (almost) no height in the example, so it can look like it's missing, especially using CSS libraries that collapse or otherwise reset default styles. Adding CSS height to all <tr> shows that it's working as expected.
Run Strapi locally on your PC (npm run develop).
Make schema/content-type changes.
Run npm run build.
Upload updated code to cPanel and restart Node app.
Please refer to the following article.
This lists the configuration steps for the authorization code as well as the client credentials flow.
https://community.snowflake.com/s/article/Connect-from-Power-Automate-Using-OAuth
If this is happening with render() or similar, try adding the form to the template context, something like:
return render(request, self.template_name, {'form': self.get_form()})
# ^^^^^^^^^^^^^^^^^^^^^^^^^
Still don't know why, but compilation succeeds only with GNU ld replaced by the LLD linker. For the command from the question to succeed, we need to add the --fuse-ld=lld option:
# clang++-18 --target=x86_64-pc-windows-gnu --std=c++23 ./test.cpp -o ./test.exe --static -lstdc++exp -fuse-ld=lld
Hi, have you fixed it? I have the same problem: there is no telemetry data available when I run the following:
import carla
import random
import time

def main():
    client = carla.Client('127.0.0.1', 2000)
    client.set_timeout(2.0)
    world = client.get_world()

    actors = world.get_actors()
    print([actor.type_id for actor in actors])

    blueprint_library = world.get_blueprint_library()
    vehicle_bp = blueprint_library.filter('vehicle.*')[0]
    spawn_points = world.get_map().get_spawn_points()

    # Spawn vehicle
    vehicle = None
    for spawn_point in spawn_points:
        vehicle = world.try_spawn_actor(vehicle_bp, spawn_point)
        if vehicle is not None:
            print(f"Spawned vehicle at {spawn_point}")
            break
    if vehicle is None:
        print("Failed to spawn vehicle at any spawn point.")
        return

    front_left_wheel = carla.WheelPhysicsControl(tire_friction=2.0, damping_rate=1.5, max_steer_angle=70.0, long_stiff_value=1000)
    front_right_wheel = carla.WheelPhysicsControl(tire_friction=2.0, damping_rate=1.5, max_steer_angle=70.0, long_stiff_value=1000)
    rear_left_wheel = carla.WheelPhysicsControl(tire_friction=3.0, damping_rate=1.5, max_steer_angle=0.0, long_stiff_value=1000)
    rear_right_wheel = carla.WheelPhysicsControl(tire_friction=3.0, damping_rate=1.5, max_steer_angle=0.0, long_stiff_value=1000)
    wheels = [front_left_wheel, front_right_wheel, rear_left_wheel, rear_right_wheel]

    physics_control = vehicle.get_physics_control()
    physics_control.torque_curve = [carla.Vector2D(x=0, y=400), carla.Vector2D(x=1300, y=600)]
    physics_control.max_rpm = 10000
    physics_control.moi = 1.0
    physics_control.damping_rate_full_throttle = 0.0
    physics_control.use_gear_autobox = True
    physics_control.gear_switch_time = 0.5
    physics_control.clutch_strength = 10
    physics_control.mass = 10000
    physics_control.drag_coefficient = 0.25
    physics_control.steering_curve = [carla.Vector2D(x=0, y=1), carla.Vector2D(x=100, y=1), carla.Vector2D(x=300, y=1)]
    physics_control.use_sweep_wheel_collision = True
    physics_control.wheels = wheels
    vehicle.apply_physics_control(physics_control)

    time.sleep(1.0)
    if hasattr(vehicle, "get_telemetry_data"):
        telemetry = vehicle.get_telemetry_data()
        print("Engine RPM:", telemetry.engine_rotation_speed)
        for i, wheel in enumerate(telemetry.wheels):
            print(f"Wheel {i}:")
            print(f"  Tire Force: {wheel.tire_force}")
            print(f"  Long Slip: {wheel.longitudinal_slip}")
            print(f"  Lat Slip: {wheel.lateral_slip}")
            print(f"  Steer Angle: {wheel.steer_angle}")
            print(f"  Rotation Speed: {wheel.rotation_speed}")
    else:
        print("there is no telemetry data available for this vehicle.")

if __name__ == '__main__':
    main()
I have been using Astronomer, and it's free: https://www.astronomer.io/docs/astro/cli/get-started-cli/
I ran into the same error myself and was stuck on it for about a week. It's a simple fix, though: just install the current Python version from the Microsoft Store. Installing Python requires no subscription; it's free.
I tried to embed the whole Superset behind an nginx SSL proxy and an Apache httpd acting as a microservice controller, via an iframe in the frontend.
I could not get it working by proxying a URL like /superset/, even with all cookies, headers, prefixes, and networks properly set in the Docker environment; it kept interfering with other URLs.
What did the trick was removing nginx and terminating SSL directly in httpd and the other microservices.
I also had to install flask-cors, set "HTTP_HEADERS = {'X-Frame-Options': 'ALLOWALL'}", and start Superset with gunicorn instead of the superset CLI itself.
But boy, did I wrap my head around this... I lost nearly two weeks.
A quick and readable way to obtain all bit flags of a [Flags] enum BitFlags is
BitFlags bitFlags = Enum.GetValues<BitFlags>().Aggregate((a, b) => a | b);
For better reuse it should be put into a method, but unfortunately doing that is a pain. The simplest way I found needs reflection and an object cast.
using System.Linq;
static class EnumExtension<T> where T : struct, Enum
{
public readonly static T AllFlags;
static EnumExtension()
{
var values = typeof(T).GetFields().Where(fi => fi.IsLiteral).Select(fi => (int)fi.GetRawConstantValue()); // assumes the enum's underlying type is int
AllFlags = (T)(object)(values.Aggregate((a, b) => a | b));
}
}
It can be used as EnumExtension<BitFlags>.AllFlags and is only computed once for each enum, thanks to the static constructor.
C# 14 comes with static extension members for types, so with an extension block we could hopefully write BitFlags.AllFlags.
Git uses the Myers Diff Algorithm. This is a link to the original paper. Here is a Python code and interactive visualization from the Robert Elder's Blog. James Coglan also has a series of articles in his blog about it. Here is a table of contents of the series:
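To get a feel for the kind of shortest edit script those articles describe, here is a naive O(N*M) dynamic-programming sketch. Myers' contribution is computing the same script in O((N+M)D) time and linear space, so this only illustrates the output format, not his algorithm:

```python
def lcs_diff(a, b):
    """Produce a line-based edit script via longest-common-subsequence DP.
    Output format mimics unified diffs: '  ' keep, '- ' delete, '+ ' insert."""
    n, m = len(a), len(b)
    # dp[i][j] = length of the LCS of a[i:] and b[j:]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if a[i] == b[j]:
                dp[i][j] = dp[i + 1][j + 1] + 1
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
    i = j = 0
    script = []
    while i < n and j < m:
        if a[i] == b[j]:
            script.append("  " + a[i]); i += 1; j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            script.append("- " + a[i]); i += 1
        else:
            script.append("+ " + b[j]); j += 1
    script += ["- " + x for x in a[i:]]
    script += ["+ " + x for x in b[j:]]
    return script

print(lcs_diff(["a", "b", "c"], ["a", "c", "d"]))
# ['  a', '- b', '  c', '+ d']
```

Git's actual implementation also adds heuristics (and alternatives like --patience and --histogram) on top of the core algorithm.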
As funny as it may seem: read your errors carefully. For example, I had an SSL certificate verification error, and the problem was solved when I turned off my VPN.
You've got a trained YOLOv8-seg model, but you're feeding your entire image into a second classifier. Stop doing that. You're poisoning your classifier with useless noise from the background. You need to isolate the muzzle, and there are only two ways to think about it.
Option 1: crop with the bounding box. This is the fastest method. YOLO gives you a bounding box, a simple rectangle, and you use its coordinates to crop the original image. It's quick, but it's sloppy: you're still including background pixels that aren't part of the muzzle. Better than nothing, but we can do better.
Option 2: isolate with the segmentation mask. This is the method you should be using. Your YOLOv8-seg model provides a precise pixel mask for the muzzle. You use this mask to create a new image where every single pixel that is not the muzzle is blacked out.
The result? An image containing only the muzzle pixels. Zero background noise. You are feeding your classifier exactly what it needs to see and nothing more.
The verdict is simple: If you're serious about accuracy, use the mask. Cropping with a bounding box is a shortcut that leaves performance on the table. Isolate your object properly and stop feeding noise to your models.
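To make the mask option concrete, here is a minimal sketch of blacking out the background (pure Python for clarity; with numpy this is just image * mask[..., None]). The function name and toy data are illustrative; in practice the mask comes from your YOLOv8-seg results:

```python
def mask_out_background(image, mask, background=0):
    """Keep only the pixels where mask is 1; black out everything else.
    image is H x W x C (nested lists here), mask is H x W of 0/1."""
    return [
        [px if m else [background] * len(px) for px, m in zip(row, mrow)]
        for row, mrow in zip(image, mask)
    ]

img = [[[200, 200, 200] for _ in range(3)] for _ in range(3)]  # 3x3 gray image
mask = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]                       # only the center is "muzzle"
out = mask_out_background(img, mask)
print(out[1][1], out[0][0])  # [200, 200, 200] [0, 0, 0]
```

The classifier then sees only the muzzle pixels, which is exactly the point of the mask approach.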
I faced the same issue. Cypress was not able to detect the Chrome browser installed in my Windows system.
I have Windows 11, two versions of Cypress v13.6.4 and v12.10.0 across projects.
I had installed the Chrome browser for 'current user'. It was installed in the User's profile folder. Cypress was not able to find the Chrome browser.
Later, I installed Chrome browser for 'all users'. This time Chrome was installed in 'C:\Program Files\Google\Chrome\Application' folder. After this Cypress was able to detect Chrome browser in my system.
Try installing Chrome for 'all users'.
If you want to remove coupons entirely, use this:
WooCommerce > Settings > Enable coupons
Uncheck "Enable the use of coupon codes", then save.
Thanks to the help I received in the comments I understood that maybe I was being too clever with the default trait method and the blanket impl. I could achieve the same in a much simpler way just by moving the method implementation to the trait impl:
pub trait IntoMyError {
fn into_my_err(self, id: String) -> MyError;
}
impl<T: Into<anyhow::Error>> IntoMyError for T {
fn into_my_err(self, id: String) -> MyError {
MyError {
id,
kind: MyErrorKind::from(anyhow::Error::from(self)),
}
}
}
After consulting with a colleague, it became obvious that AWS AppRunner is ignoring the HOSTNAME env variable set up in the docker container itself and is binding to a host it cannot reach.
Thus setting HOSTNAME=0.0.0.0 in the AppRunner env variables resolves this issue.
conda install -c conda-forge libstdcxx-ng
then create the OSMesa context
I think you need to change PHP's error reporting:
error_reporting(E_ALL & ~E_NOTICE);
I would stick to the twelve-factor app guidelines. Specifically, keep in mind:
Do not use an environment.properties file baked into the application. Bundling it in means you cannot run an old build on a new environment without introducing the env file and re-assembling the application.
But honestly, I am searching for an answer to your question too (from a technological point of view, how to do it). I am used to Spring and its configuration, which works nicely: it can be set by a properties file inside or outside the application, overridden by command-line parameters or environment variables, and even loaded from ZooKeeper, Consul, and others.
I wish there were an equivalent way to do this in Jakarta/Java EE.
Already in progress but taking forever:
https://github.com/flutter/flutter/issues/153092
Haters gonna hate
This is the status; when they update the issue I will update the solution here. Stop policing the posts for no reason.
This is becoming ridiculous: paid bots blocking people from accessing information. Stack Overflow was once open for discussion and improvement.
I stumbled upon the same problem yesterday, so I used Benyamin Limanto's workaround to create this trait:
https://gist.github.com/gstrat88/6b39a232a57cf217ed8e94b8dfbe30cb
I was able to get the desired output by replacing
this.dataSource.data.slice();
with
JSON.parse(JSON.stringify(this.dataSource.data));
which performs a deep copy.
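Worth noting for anyone copying this: the JSON round trip only deep-copies JSON-serializable data, so Dates turn into strings and functions disappear:

```javascript
// The JSON round trip deep-copies plain data, but loses anything
// that JSON cannot represent.
const original = [{ name: "node", added: new Date(0), onClick: () => 1 }];
const copy = JSON.parse(JSON.stringify(original));

console.log(typeof copy[0].added);  // "string"  (Date serialized to ISO text)
console.log("onClick" in copy[0]);  // false     (functions are dropped)
```

For a tree data source of plain objects and primitives this is fine, which is why it works here.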
I also changed the HTML from:
<div role="group">
<ng-container matTreeNodeOutlet></ng-container>
</div>
to:
<div role="group" *ngIf="treeControl.isExpanded(node)">
<ng-container matTreeNodeOutlet></ng-container>
</div>
to conditionally render the child nodes only when the current node is expanded.
If the problem still isn't solved, you can check this StackOverflow link for help:
In the end I had decided to use a cron job to check every x amount of time, which handles the deletion.
Try Docmosis or Aspose; both are paid products.
You physically changed the location of the files on your PC, so when you try to run the same program from your Android device without making the necessary changes to the code, you are bound to run into errors. I would recommend hosting your project on GitHub; that way you can access your account from both your phone and PC, and pull and push your projects from different devices with ease, as long as you are logged in.
Let's be honest, we've all seen a regression model get stuck stubbornly predicting the mean, and it's a classic sign of a logic trap, not a code bug. Your model's real problem is that you're handing it a "bad" image along with the very tension value that created that bad result, but you're asking it to magically guess the optimal tension. It has no data for what "optimal" looks like, so it plays it safe and guesses the average to minimize its error. The game-changer here isn't a more complex model; it's reframing the question. Stop asking for the final answer and instead, train your model to predict the one thing it can actually learn from the image: the adjustment needed to get from bad to good.
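Concretely, the reframing is just a change of label; the names below are hypothetical:

```python
# Hypothetical example: relabel the target from "optimal tension" to
# "adjustment needed", so the model learns from what the image actually shows.
applied = [4.0, 6.0, 5.5, 3.0]   # tension that produced each (bad) image
optimal = [5.0, 5.0, 5.0, 5.0]   # tension that would have fixed it

# Old target: predict `optimal` directly. Nothing in the image explains a
# constant, so the regressor collapses to the mean.
# New target: predict the correction, which the image's defects do encode.
delta = [opt - app for app, opt in zip(applied, optimal)]
print(delta)  # [1.0, -1.0, -0.5, 2.0]
```

At inference time you add the predicted delta to the currently applied tension to get the recommendation.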
Function Prova(ParamArray args() As Variant)
Dim result As String
Dim i As Integer
result = ""
For i = LBound(args) To UBound(args)
result = result & args(i) & " "
Next i
Prova = Trim(result)
End Function
Using OnlineTextTools’ Text Replacer, you can’t directly detect formatting like bold text in a cell—it only works with plain text. You’d need to preprocess your text elsewhere to mark bold parts, then use the tool to add “;” before them.
Here is the correct solution with Bootstrap 5 (note that Bootstrap 5 dropped the xs infix, so it's col-12, not col-xs-12):
<div class="col-12 col-md-6">
<p>text</p>
</div>
<div class="col-12 col-md-6 order-first order-md-last">
<img src="" />
</div>
If you're using Excel and want a formula to be added automatically with the help of VBA, you can do it with a small bit of code. Every time you type something in a row or column, Excel will fill in a formula for you, doing the math on its own without you typing it again and again.
For example, if you want Excel to automatically calculate total = price × quantity, you can write a small VBA event handler that tells Excel: "Whenever I type something in column A or B, put the formula in column C." It's like teaching Excel a rule once, and it remembers it every time.
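As a sketch of that rule (paste into the worksheet's code module in the VBA editor; the column letters and formula are just the price/quantity example above, so adapt them to your sheet):

```vba
' Assumes price in column A, quantity in column B, total in column C.
Private Sub Worksheet_Change(ByVal Target As Range)
    Dim cell As Range
    If Intersect(Target, Me.Range("A:B")) Is Nothing Then Exit Sub
    Application.EnableEvents = False          ' avoid re-triggering this handler
    For Each cell In Intersect(Target, Me.Range("A:B"))
        Me.Cells(cell.Row, "C").FormulaR1C1 = "=RC1*RC2"  ' total = price * quantity
    Next cell
    Application.EnableEvents = True
End Sub
```

The EnableEvents guard matters: writing the formula from inside the handler would otherwise fire Worksheet_Change again.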
After re-reading the docs, I tried imap_unordered() and it worked. So only the code of do_stuff() needed to change; it now looks like this:
def do_stuff(self):
    pool_size = 8
    p = multiprocessing.Pool(pool_size)
    p.imap_unordered(self.foo, range(pool_size))
    p.close()
    p.join()  # wait for the workers to finish before returning
I can just say it's a typical symptom of a firewall blocking your connections from your Gatling instance to your target system under test.
Plotly doesn't support visible link labels in Sankey charts; link.label doesn't do anything.
If you want to show values on the links, you need to manually add annotations using layout.annotations. Here's a basic idea:
```
const annotations = links.map((link, i) => ({
x: 0.5, // placeholder - estimate midpoints in real case
y: 0.5,
text: link.value.toString(),
showarrow: false,
font: { size: 12 }
}));
Plotly.newPlot(chartDiv, [trace], { annotations });
```
You'd need to estimate x and y per link based on node positions; not perfect, but it works.
Maybe Plotly will add native support for this in the future 🤷♂️
Make a video:
from moviepy.editor import ImageClip, concatenate_videoclips, AudioFileClip
# Load the images (use your own file names)
girl = ImageClip("girl_photo.jpg").set_duration(3).fadein(1).fadeout(1)
boy = ImageClip("boy_photo.jpg").set_duration(3).fadein(1).fadeout(1)
# Load the audio (keep a copy of the mp3 in the same folder)
music = AudioFileClip("romantic_music.mp3").subclip(0, girl.duration + boy.duration)
# Build the clip
video = concatenate_videoclips([girl, boy], method="compose").set_audio(music)
video.write_videofile("romantic_video.mp4", fps=24)
The setTimeout function in JavaScript is a powerful tool that allows developers to introduce delays in their code execution. It’s commonly used for animations, asynchronous operations, and scenarios where you need to schedule a function to run after a certain amount of time has passed. However, there’s an interesting behavior when using a delay of 0 with setTimeout() that might seem unexpected at first. In this article, we’ll explore the concept of setTimeout() with a delay of 0 and understand how it behaves.
The Basics of setTimeout()
Before diving into the behavior of setTimeout with a delay of 0, let’s briefly recap how the function works. The setTimeout function takes two arguments: a callback function (the code you want to execute after the delay) and the delay time in milliseconds.
Here’s the basic syntax:
setTimeout(callbackFunction, delayTime);
When you use setTimeout, the JavaScript engine sets a timer to wait for the specified delay time. After the delay expires, the provided callback function is added to the message queue, and the JavaScript event loop picks it up for execution when the call stack is empty.
The Curious Case of Delay 0
Now, here’s where things get interesting: using a delay of 0 milliseconds with setTimeout. At first glance, you might assume that passing a delay of 0 would result in the callback function running immediately. However, this is not the case.
When you use setTimeout(callback, 0), you’re actually instructing the JavaScript engine to schedule the callback function to be executed as soon as possible, but not immediately. In other words, the function is placed in the message queue just like any other asynchronous task, waiting for the call stack to clear.
Example:
Let’s illustrate the behavior of setTimeout() with a delay of 0 using a practical example:
console.log("Start");
setTimeout(function() {
console.log("Callback executed");
}, 0);
console.log("End");
In this example, you might expect the output to be:
Start
Callback executed
End
However, due to the asynchronous behavior, the actual output will be:
Start
End
Callback executed
Why Use a Delay of 0?
You might wonder why anyone would want to use `setTimeout` with a delay of 0 if it doesn’t execute the function immediately. The reason lies in JavaScript’s single-threaded nature and its event-driven architecture. By using a delay of 0, you allow other tasks, such as rendering updates or user interactions, to take place before your callback is executed. This can help prevent blocking the main thread and ensure a smooth user experience.
Conclusion:
Using setTimeout() with a delay of 0 might seem a bit surprising, but it’s an important concept to grasp in JavaScript’s asynchronous world. It allows you to effectively schedule a task to be executed as soon as the call stack is clear, without blocking the main thread. This can be particularly useful for scenarios where you want to defer a function’s execution until the current execution context has finished. As you journey through JavaScript, this trick will be your secret to creating smoother, glitch-free web experiences. Happy coding!
I faced a similar problem: after adding the Tailwind classes to the HTML elements and saving the file while the live server was running, the browser didn't reflect the changes until I manually reloaded it.
You might be using one of the two popular live server extensions, either Live Preview by Microsoft or Live Server by Ritwick Dey.
To fix this issue for both of them, go into the extension settings and configure them as shown below.
For Live Preview by Microsoft --
Live Preview --> Settings --> Auto Refresh Preview --> change it to "On changes to saved files" --> Restart the live server and you are good to go
For Live Server by Ritwick Dey --
Live Server --> Settings --> Full Reload --> Enable it --> Restart the live server and it will start working
In your implementations, add the bean name (the @Service annotation goes on the class, not the method):
@Service("oldFeatureService")
public class OldFeatureService implements FeatureService {
    public String testMe() {
        return "Hello from OldFeatureService";
    }
}
@Service("newFeatureService")
public class NewFeatureService implements FeatureService {
    public String testMe() {
        return "Hello from NewFeatureService";
    }
}
By default, Spring Boot injects the bean selected with @Qualifier("oldFeatureService"). When the flag is enabled, it looks up the alternative bean, newFeatureService.
I found the perfect solution!
I tried many tools (online converters, pandoc, md-to-pdf, grip, VS Code extensions). None of them produced exactly the same formatting as GitHub.
What I did was edit the HTML directly in the browser to leave only the desired part of the page visible.
So: open the markdown file on GitHub -> open the developer console -> find the HTML element with my content -> copy the element -> hide the parent element with all the page content -> insert the copied element nearby -> Ctrl+P -> print to PDF.
P.S.: I suppose this solution only works well for relatively small files. 🙂
In short, you can't.
From docs:
Immutable Map is an unordered Collection...
Iteration order of a Map is undefined...
Source:
I have found the following works:
<rules>
<logger name="*" minlevel="Info" writeTo="logfile" />
<logger name="*" minlevel="Trace" maxLevel="Debug" writeTo="logfile" >
<filters defaultAction="Ignore">
<when condition="'${event-properties:item=Category}' == 'MyCategory'" action="Log" />
</filters>
</logger>
</rules>
However, it feels very clunky, especially having to restrict the categories with a maxLevel to prevent duplication, so I'm sure there must be a better way?
I started exporting classes with the example from the docs here: Exporting classes with type accelerators.
I achieved everything I needed by doing so. Give it a try.
Disable Node Auto-Provisioning (NAP), avoiding automatic changes or unexpected resource allocation.
Cordon and drain the node: cordoning marks it as unschedulable, preventing new Pods from being assigned, while draining evicts the running workloads gracefully.
Delete the actual node.
If you want to keep using NAP, it is recommended to use a custom boot disk and SSD.
https://github.com/mkubecek/vmware-host-modules/issues/306#issuecomment-2843789954
This patch solved this problem
Maybe someone will need this:
man hier
Try running pip install python-telegram-bot in your terminal. If it doesn't work, make sure pip is updated using python -m pip install --upgrade pip, and that you're not using a restricted environment like a school/college PC with admin rights disabled.
In Delphi, you can assign a component’s property values before the main form is created by modifying the code in the project (.dpr) file — create the form manually using Application.CreateForm, then set the properties before showing it.
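A minimal sketch of what the .dpr could look like (the unit and form names here are hypothetical; properties set between CreateForm and Run take effect before the form is first shown):

```delphi
program MyApp;

uses
  Vcl.Forms,
  MainFormUnit in 'MainFormUnit.pas' {MainForm};

begin
  Application.Initialize;
  Application.CreateForm(TMainForm, MainForm);
  // Set properties here, before Application.Run shows the form
  MainForm.Caption := 'Configured at startup';
  MainForm.Position := poScreenCenter;
  Application.Run;
end.
```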
function isZeroArgs(func: Function): func is () => unknown {
return func.length === 0;
}
function sayHello() {
return "Hello!";
}
function greet(name: string) {
return `Hello, ${name}!`;
}
if (isZeroArgs(sayHello)) {
sayHello(); // OK
}
if (isZeroArgs(greet)) {
greet();
}
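One caveat with this guard: Function.prototype.length counts only the parameters declared before the first default or rest parameter, so a function that does take arguments can still report a length of 0:

```typescript
// length ignores default and rest parameters, so this reports 0 even
// though the function happily consumes arguments.
function greetAll(greeting = "Hello", ...names: string[]): string {
  return `${greeting}, ${names.join(" and ")}!`;
}

console.log(greetAll.length); // 0, so isZeroArgs(greetAll) would be true
```

So the check is a heuristic over the declared signature, not a guarantee about runtime behavior.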
To build a SaaS that sends emails for clients using your services (SMTPwire, WarmupSMTP, DedicatedSMTPserver):
Set up SMTP backend (yours or allow client’s)
Create web dashboard (login, campaigns, stats)
Add domain auth (SPF, DKIM, DMARC setup)
Build email editor + scheduler
Use job queues for sending control
Show analytics (opens, clicks, bounces)
Add billing system for plans
White-label for resellers
Done right, it runs fully on your own infra — high control, high margins.
You can do it by subclassing:
type
TADOConnection = class(Data.Win.ADODB.TADOConnection)
protected
procedure Loaded; override;
end;
implementation
procedure TADOConnection.Loaded;
begin
StreamedConnected := False;
inherited;
end;
It seems like external libraries might not be allowed, but there is a chance that I could use python-pptx in the python_user_visible environment, though I’m not sure. Alternatively, I could create an HTML skeleton for the user to download and use with a tool. Maybe a better approach would be to provide a CSV or JSON script with a timeline, text, and voice references, as many text-to-video platforms support imports in those formats. I’ll generate a CSV with this approach, allowing for easy importing.
echo entered | awk '{printf "%s", $1}'
maybe?
I found an answer: I made a function that uses the noise to check whether the position above a point is not part of the land.
func is_point_on_surface(pos: Vector2i, surface_up_vector: Vector2i) -> bool:
    # surface_up_vector must be negated because
    # the noise goes in direction +x and -y
    var neighbour_pos = pos + (block_size * -surface_up_vector)
    var noise_val: float = noise.get_noise_2dv(neighbour_pos * 0.1)
    return noise_val > min_noise_val
We reassigned the VPN host to an address outside that range, and the workflow is green again.
If you want a one-liner, no extra Sub, Function or Dim, this is it (replace S with your string or variable):
CreateObject("htmlfile").ParentWindow.ClipboardData.SetData "text", CVar(S)
The code works well to concatenate the two columns. How would you add a space between the ranges, as in First_Name " " Last_Name? When I try to add the space, I get a type mismatch:
.Evaluate("=B:B & " " & D:D") ' Doesn't work with space added. Gives type mismatch
.Evaluate("=B:B&D:D")
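A hedged note on that type mismatch: it most likely comes from the VBA string literal, not from Evaluate. Inside a VBA string, a double quote must be escaped by doubling it, so the space version would be written like this (or you can sidestep the quoting with the worksheet CHAR function):

```vba
' Doubled quotes embed a quoted space inside the string literal
.Evaluate("=B:B&"" ""&D:D")
' Equivalent, avoiding nested quotes: CHAR(32) is a space
.Evaluate("=B:B&CHAR(32)&D:D")
```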
In ROS, roslaunch uses the max_exposure parameter to set the maximum exposure time for a camera sensor. This controls how long the sensor collects light, helping to optimize image brightness and prevent overexposure in bright environments.
The issue likely stems from how Visual Studio runs the application in a different runtime environment compared to PowerShell or a direct EXE execution. While the credentials and identity are the same, Visual Studio may not propagate Windows authentication tokens correctly or might introduce differences in TLS settings or request headers. This can lead to a 500 Internal Server Error from the Azure DevOps Server REST API. Running the app directly outside Visual Studio, setting SecurityProtocol to TLS 1.2, and comparing network requests using Fiddler can help identify the root cause.
In addition, this guide helps: https://youtu.be/3_CV_zXyExw?si=SjLvDuaqZjQXuR_Z
Apparently, I found an answer to my question after some tests. This gives the intended output:
:::{.column-screen}
:::{.column-screen-inset-left}
Some Text 1
:::
:::{.column-margin}
Some Text 2
:::
:::
See below:
This issue was related to a new Databricks feature, executor-side broadcast join (https://kb.databricks.com/python/job-fails-with-not-enough-memory-to-build-the-hash-map-error), so to overcome it we needed to disable executor broadcast.
Using Databricks notebook autocomplete, we found the class that contains all Databricks-related configuration: com.databricks.sql.DatabricksSQLConf.
Inspecting this class's public members, we found the setting that disables executor-side broadcast join: spark.databricks.execution.executorSideBroadcast.enabled.
Disabling executor broadcast resolved our problem: no issues with broadcasting anymore, and AQE works fine.
It is too bad that Databricks has a lot of properties that affect query execution but are not documented.
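Based on the setting name above, disabling it for a session would look like this (the key is Databricks-internal and undocumented, so treat it as subject to change between runtime versions):

```python
# Run in a Databricks notebook or job before the affected query
spark.conf.set("spark.databricks.execution.executorSideBroadcast.enabled", "false")
```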
I directly spoke with an engineer from the hosting provider, instead of relying on the general tech support representative who initially suggested upgrading the hosting plan. After further debugging and investigation, the engineer confirmed that cPanel does not support the necessary runtime environment — specifically:
Native binary support is limited
WASM execution is often restricted
V8 runtime access is restricted in shared hosting environments
Given these limitations, I decided not to upgrade to a VPS. Instead, I deployed the bot using Google Cloud Run, which allows us to scale more flexibly as the user base grows. The deployment was straightforward. I simply containerized the project using Docker and deployed it.
While Cloud Run may be more expensive at large scale, it provides a scalable and efficient solution that fits our current needs.
I would prefer BleuIO. It's a smart BLE USB dongle that helps create BLE applications easily with the AT commands on the device.
Seems like a bug to me. Would you accept a workaround where we save the flextable as HTML and take a webshot screenshot?
library(dplyr)
library(flextable)
library(webshot)
test <- data.frame(
Spalte1 = 1:5,
Spalte2 = letters[1:5],
Spalte3 = c(TRUE, FALSE, TRUE, FALSE, TRUE),
Spalte4 = rnorm(5))
tmp <- tempfile(fileext = ".html")
test %>% flextable() %>%
add_header_row(colwidths = c(2,2), values = c("eins", "zwei")) %>%
align(align = "center", part = "all") %>%
border_remove() %>%
vline(j = 2) %>%
save_as_html(path = tmp)
webshot::webshot(
tmp,
file = "test.png",
vwidth = 300, # play with the width
vheight = 200, # play with the height
zoom = 2 # controls resolution
)
unlink(tmp) # cleanup temporary html
giving the rendered table as test.png.
You can try BleuIO. It works in both central and peripheral roles, and you can mock any BLE device with it. The AT commands available on the device make it easy to build BLE applications.
Try BleuIO. It works in both central and peripheral roles and is easy to use via the AT commands available on the device.
You can try BleuIO. It works in both central and peripheral roles.
I would prefer BleuIO, which comes with AT commands on the device. It works on any platform and is easy to use with Bluetooth Low Energy.
Building on the answers from @norlihazmey-ghazali, here's what's working for me, with an explanation below:
let isCleanedUp = false;
async function cleanUp() {
// waiting for some operations to be done
isCleanedUp = true;
}
app.on("before-quit", (event) => {
if (!isCleanedUp) {
event.preventDefault();
cleanUp().then(() => app.quit());
}
});
The callback function for the before-quit event is not asynchronous. Passing an async function is the same as passing a synchronous function that returns a pending promise.
async function asynchronous1() {
// ...
}
function asynchronous2() {
return new Promise((resolve, reject) => {
// ...
resolve();
});
}
Calling either of those two functions outside of an async context returns a promise that can either be stored somewhere or handled whenever it settles.
function synchronous() {
const pendingPromise = asynchronous1();
// synchronous code, promise is still pending
pendingPromise.then(() => {
// inside this context the promise has settled
});
}
There's no indicator in Electron documentation that the callback is handled as an asynchronous function. So no matter what the callback returns, Electron will continue with its synchronous code without waiting for a returned promise to settle.
Using the code from earlier, it could look something like this:
async function callback(event) {
console.log("called callback");
if (!isCleanedUp()) {
console.log("not cleaned up yet");
event.preventDefault();
await cleanUp();
console.log("quitting after cleanup");
app.quit();
}
}
function electronHandling(event) {
console.log("call all before-quit handlers");
callback(event);
if (isDefaultPrevented(event)) {
return;
}
console.log("call all quit handlers");
// ...
console.log("closing the app");
// ...
}
Seeing that electronHandling is synchronous, the output of above code looks like this:
call all before-quit handlers
call all quit handlers
closing the app
called callback
not cleaned up yet
quitting after cleanup
With a small adjustment in the callback you can make the execution order more obvious:
function callback(event) {
console.log("synchronous callback");
return new Promise((resolve, reject) => {
console.log("called callback");
if (!isCleanedUp) {
console.log("not cleaned up yet");
event.preventDefault();
cleanUp().then(() => {
console.log("quitting after cleanup");
app.quit();
resolve();
});
}
});
}
This second callback will produce the following output:
call all before-quit handlers
synchronous callback
call all quit handlers
closing the app
called callback
not cleaned up yet
quitting after cleanup
The callbacks of a promise are executed as microtasks. While the calling of the registered callbacks is synchronous, there might still be some asynchronous tasks in the Electron code that allow microtasks to execute before the app has fully quit. So it is possible that a well-placed console.log() gives a microtask enough time to run a cleanup that's not within the JavaScript thread, even though it is not awaited properly. That is not fun to debug by adding logs, so prefer the proper solution over one that works by chance.
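A minimal, self-contained sketch of that microtask ordering (plain Node/browser JavaScript, independent of Electron):

```javascript
// A promise callback runs as a microtask: it is queued immediately,
// but only executes after the current synchronous code has finished.
const order = [];
order.push("sync 1");
Promise.resolve().then(() => order.push("microtask"));
order.push("sync 2");
// At this point order is ["sync 1", "sync 2"];
// "microtask" is appended only after this synchronous run completes.
```

This is the same mechanism that lets a cleanup appear to "work" without being awaited: the microtask sneaks in between synchronous steps.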
Hibernate and NHibernate are both ORMs, but Hibernate is used with Java while NHibernate is used with .NET.
For NHibernate, you can install it from NuGet (https://www.nuget.org/packages/nhibernate) in your .NET project.
You can learn more about it here: Learn NHibernate
How does the microcontroller keep track of where a malloc will point to in the heap?
This is implementation-defined: malloc lives in a library and is compiled like any other function. Here is one implementation you can look at: https://github.com/32bitmicro/newlib-nano-1.0/blob/master/newlib/libc/stdlib/malloc.c
Usually, malloc stores a header full of information (such as the number of bytes allocated) next to each allocation.
After free(x), are subsequent mallocs able to use the memory that was allocated for x or is it blocked because of malloc for y?
free(x) frees up the memory for x; y has nothing to do with it.
For clarity, each call to malloc returns a new pointer, but once x is freed, that memory region may be handed out again by a later malloc.
You also don't need to cast the pointer returned by malloc, and it's recommended not to!
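A small sketch illustrating both points (the names x, y, z mirror the question; whether z actually reuses x's old region is up to the allocator, so the code doesn't assume it):

```c
#include <stdlib.h>
#include <string.h>

/* Returns 1 if y's contents survive free(x), showing that free(x)
   releases only x's allocation; y is untouched. */
int free_is_independent(void) {
    char *x = malloc(32);   /* no cast needed in C */
    char *y = malloc(32);
    if (x == NULL || y == NULL) return 0;
    strcpy(y, "still valid");
    free(x);                /* releases x's region for possible reuse */
    char *z = malloc(32);   /* may or may not reuse x's old region */
    int ok = (z != NULL) && (strcmp(y, "still valid") == 0);
    free(y);
    free(z);
    return ok;
}
```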
This is how you do it:
server.use-forward-headers=true
Using the above property, the Google sign-in issue (/login?error) in my Scala Spring Boot application has been resolved. The error was:
org.springframework.security.oauth2.core.OAuth2AuthenticationException: [invalid_redirect_uri_parameter]
at org.springframework.security.oauth2.client.authentication.OAuth2LoginAuthenticationProvider.authenticate(OAuth2LoginAuthenticationProvider.java:110) ~[spring-security-oauth2-client-5.1.5.RELEASE.jar:5.1.5.RELEASE]
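Note that on Spring Boot 2.2+ the server.use-forward-headers property is deprecated in favor of server.forward-headers-strategy; a sketch of the equivalent setting:

```properties
# Spring Boot 2.2+ replacement for server.use-forward-headers=true
server.forward-headers-strategy=native
```

Use framework instead of native if you want Spring's ForwardedHeaderFilter to process the forwarded headers rather than the embedded server.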
A simple way: use a 'Select' action for the columns and a 'For each' action to combine the rows into the final result.
Workflow - Run Result
Workflow description:
1. Manual trigger added.
2. Parse JSON action to fetch your received data.
3. Select action added to fetch the columns (to read 'name'):
select: range(0,length(first(outputs('Parse_JSON')?['body']?['tables'])?['columns']))
map: first(outputs('Parse_JSON')?['body']?['tables'])?['columns']?[item()]?['name']
4. Initialize a variable (append) to collect the output.
5. For each action added to read the 'rows' data against the items selected in the earlier step:
for each: first(outputs('Parse_JSON')?['body']?['tables'])?['rows']
select: range(0,length(body('Select_column')))
map: body('Select_column')?[item()] vs items('For_each_rows')?[item()]
6. Compose action to get the final primary result.
The Python development headers are missing. The file Python.h is part of python3-dev, which must be installed; try:
python --version # say python 3.13
sudo apt install python3.13-dev # install the appropriate dev package
I was facing a similar issue because I was trying to update the kafka-clients module to 3.9.1 on its own.
I managed to get it working by forcing all modules in the group org.apache.kafka to 3.9.1 instead of just the kafka-clients module on its own.
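In Gradle this can be sketched with a resolution strategy (shown in the Kotlin DSL; adjust to Groovy syntax if your build uses it):

```kotlin
// Force every org.apache.kafka module to the same version,
// not just kafka-clients.
configurations.all {
    resolutionStrategy.eachDependency {
        if (requested.group == "org.apache.kafka") {
            useVersion("3.9.1")
        }
    }
}
```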
The error "cannot get into sync" and the subsequent related errors appear when the correct serial port is not selected. Check and verify that the correct serial port is selected. In the IDE, you can find it under Tools -> Port. In most cases, /dev/ttyACM3 should be selected.
There is an answer to your question. In short: use socat instead. Pros: it has no (obligatory) time lag before it quits.
When you have an authentication-enabled app, you must gate your Compose Navigation graph behind a “splash” or “gatekeeper” route that performs both:
Local state check (are we “logged in” locally?)
Server/session check (is the user’s token still valid?)
Because the Android 12+ native splash API is strictly for theming, you should:
Define a SplashRoute as the first destination in your NavHost.
In that composable, kick off your session‐validation logic (via a LaunchedEffect) and then navigate onward.
@Composable
fun AppNavGraph(startDestination: String = Screen.Splash.route) {
val navController = rememberNavController()
NavHost(navController = navController, startDestination = startDestination) {
composable(Screen.Splash.route) { SplashRoute(navController) }
composable(Screen.Login.route) { LoginRoute(navController) }
composable(Screen.Home.route) { HomeRoute(navController) }
}
}
SplashRoute composable:
@Composable
fun SplashRoute(
navController: NavController,
viewModel: SplashViewModel = hiltViewModel()
) {
// Collect local-login flag and session status
val sessionState by viewModel.sessionState.collectAsState()
// Trigger a one‑time session check
LaunchedEffect(Unit) {
viewModel.checkSession()
}
// Simple UI while we wait
Box(Modifier.fillMaxSize(), contentAlignment = Alignment.Center) {
CircularProgressIndicator()
}
// React to the result as soon as it changes
when (sessionState) {
SessionState.Valid -> navController.replace(Screen.Home.route)
SessionState.Invalid -> navController.replace(Screen.Login.route)
SessionState.Loading -> { /* still showing spinner */ }
}
}
NavController extension
To avoid back‑stack issues, you can define:
fun NavController.replace(route: String) { navigate(route) { popUpTo(0) { inclusive = true } } }
SplashViewModel:
@HiltViewModel
class SplashViewModel @Inject constructor(
private val sessionRepo: SessionRepository
) : ViewModel() {
private val _sessionState = MutableStateFlow(SessionState.Loading)
val sessionState: StateFlow<SessionState> = _sessionState
/** Or call this from init { … } if you prefer. */
fun checkSession() {
viewModelScope.launch {
// 1) Local check
if (!sessionRepo.isLoggedInLocally()) {
_sessionState.value = SessionState.Invalid
return@launch
}
// 2) Remote/session check
val ok = sessionRepo.verifyServerSession()
_sessionState.value = if (ok) SessionState.Valid else SessionState.Invalid
}
}
}
SessionRepository pseudocode:
class SessionRepository @Inject constructor(
private val dataStore: UserDataStore,
private val authApi: AuthApi
) {
/** True if we have a non-null token cached locally. */
suspend fun isLoggedInLocally(): Boolean =
dataStore.currentAuthToken() != null
/** Hits a “/me” or token‑refresh endpoint. */
suspend fun verifyServerSession(): Boolean {
return try {
authApi.getCurrentUser().isSuccessful
} catch (_: IOException) {
false
}
}
}
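The snippets above reference a SessionState type without defining it; a minimal definition matching the usage (SessionState.Loading, SessionState.Valid, SessionState.Invalid) could be:

```kotlin
// Hypothetical definition of the SessionState type used above
enum class SessionState { Loading, Valid, Invalid }
```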
Single source of truth: All session logic lives in the ViewModel/Repository, not in your UI.
Deterministic navigation: The splash route never shows your real content until you’ve confirmed auth.
Seamless UX: User sees a spinner only while we’re verifying; they go immediately to Login or Home.
Feel free to refine the API endpoints (e.g., refresh token on 401) or to prefetch user preferences after you land on Home, but this gatekeeper pattern is the industry standard.
It’s possible, but not ideal. Installing solar panels on an aging or damaged roof may lead to future complications. It’s best to assess your roof’s condition first — and often, replacing or restoring the roof before solar panel installation saves time and money in the long run.
The error "unsupported or incompatible scheme" means that the key you're trying to use for signing the quote does not have the correct signing scheme set, or is not even a signing key.
To fix this, you must create the application key with a signing scheme compatible with the TPM's quote operation, like TPM2_ALG_RSASSA or TPM2_ALG_ECDSA, and mark it as a signing key.
Matplotlib always plots objects in the order they were drawn, not according to their actual position in space.
This is discussed in their FAQ, where they recommend an alternative, MayaVi2, which has a very similar approach to Matplotlib, so you don't get too confused when switching.
You can find more information in this question, which I don't want to paraphrase just for the sake of a longer answer.
When you produce your data to that Kafka topic, use a message key like "productId" or "company/productId". This guarantees that all events for a given product are produced to the same partition, which in turn guarantees the processing order of each product's data.
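The partition assignment can be sketched as follows (a simplified stand-in for the real default partitioner, which uses murmur2 hashing in the Java client; the guarantee is the same either way: equal keys always map to the same partition):

```python
import hashlib

NUM_PARTITIONS = 6  # assumed partition count for the topic

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministically map a message key to a partition."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events keyed by the same product id land in the same partition,
# so Kafka preserves their relative order.
keys = ["acme/p1", "acme/p2", "acme/p1", "acme/p1"]
parts = [partition_for(k) for k in keys]
assert parts[0] == parts[2] == parts[3]
```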
There is no such parameter in this widget.
You have to specify the exact post number (id), and unfortunately Telegram's widget no longer supports many post types (like music); it says "see the post in Telegram" and it is not supported in the browser.
You can disable the service in Cloud Run.
Just manually change the number of instances to 0.
Reference: https://cloud.google.com/run/docs/managing/services#disable
Upon reading Sandeep Ponia's answer, I checked the node-sass release notes, where I noticed my version of Node was no longer compatible with the version of node-sass used by some dependencies' dependencies; I had updated to Node v22.14.0, but node-sass was still at v6.0.1, which only supports up to Node v16.
Since it's not a direct dependency but a nested dependency, I solved this issue by updating my package.json to override the node-sass version, a feature available since Node v16:
{
...
"overrides": {
"[dependency name]": {
"node-sass": "^9.0.0"
},
"node-sass": "^9.0.0"
}
}
I got this error too, using macOS. It turned out this had to do with the Ruby version in some way (I use rvm to manage versions). This 'cannot load such file -- socket' message appeared when using Ruby 2.4.2, but when I changed the Ruby version to 2.6.6, everything installed just fine.
You can center a component with MUI's Box using flexbox:
import React from 'react';
import Box from '@mui/material/Box';
export default function CenteredComponent() {
return (
<Box
display="flex"
justifyContent="center"
alignItems="center"
minHeight="100vh"
>
<YourComponent />
</Box>
);
}
For anyone wondering about this:
The solution I found is actually quite simple. Within your node class, create an additional node subservient to the main one
...
private:
// In-class initialization; alternatively assign this in the constructor.
std::shared_ptr<rclcpp::Node> sub_node_plan =
    rclcpp::Node::make_shared("subservient_planning_node");
...
Then define the client on this sub-node. This way you can spin the client until the result is there and avoid any deadlock or threading issues.
If your table contains many overlapping dates, instead of recursively
To put it all together, from @grawity's answer and the post I linked in the first post:
Clone old repo
Clone new repo
cd into new repo
git fetch ../oldRepo master:ancient_history
git replace --graft $(git rev-list master | tail -n 1) $(git rev-parse ancient_history)
git filter-repo --replace-refs delete-no-add --force
Then I pushed it to a newly created repository.
I tried to do the same thing using pybind11. It worked perfectly. I couldn't make it work with Boost for some reason. Frustrating.