Unsure if "Does Microsoft's azure api exist?", means that a similar endpoint to Openai's realtime api exists? If yes, then sure, that exists. This can be found going from the documentation from Azure (depending on your use case).
If you're referring to the issue that deltas are received at the same point in time, as the transcription is completed, see this nice explanation on the issue:
https://learn.microsoft.com/en-us/answers/questions/5603828/no-user-transcription-deltas-from-azure-openai-rea
Basically: there is currently an architectural difference between the Azure OpenAI Realtime API and the OpenAI Realtime API. This results in deltas not being received through the Azure OpenAI Realtime API while the transcription is running, whereas if you were to use OpenAI's Realtime API directly, the deltas should be available during transcription.
Updated question: I am using the command-line client.
In the case of kidney disease prediction, a KNN classifier gives: Training Score: 77.91; Validation Score: 68.75; Testing Score: 76.25.
Is it overfitting?
What about this:
Calibrated Classifier gives: Training Score: 73.33; Validation Score: 75; Testing Score: 76.25
Why MySQL, though? Postgres with pgvector seems to be far better suited (and far more popular) for vector-DB applications.
Found the problem.
The extension Language Support for Java by Red Hat 1.50.0 had been installed by VS Code; it needs to be downgraded to v1.47.0.
This is the most robust and standard WordPress way. You store your prices in a single location, and then use code/shortcodes to display that value wherever needed.
How it works: You install a plugin like Advanced Custom Fields (ACF) or use a built-in Theme Options Panel. You create a field for each fixed price (e.g., fixed_fee_price).
Update Process: You go to the ACF Options Page (or Theme Options) and change the value of fixed_fee_price once.
Display on Pages: On your pages, instead of using a text shortcode like [price1], you use a shortcode that retrieves the value from that centralized field, like [show_custom_field field="fixed_fee_price"].
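For instance, the retrieving shortcode can be a thin wrapper around ACF's get_field() (a sketch; the field name fixed_fee_price and the shortcode tag are placeholders to adapt):

function show_fixed_fee_shortcode() {
    // Reads the centrally stored price from the ACF options page.
    return esc_html( get_field( 'fixed_fee_price', 'option' ) );
}
add_shortcode( 'show_fixed_fee', 'show_fixed_fee_shortcode' );

On a page you would then write: The fee is [show_fixed_fee].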
If your prices are just simple text, you can create a custom shortcode that returns the current value you've stored in a central location.
How it works:
You define a function in your theme's functions.php file (or a custom plugin) that defines the price:
function fixed_price_shortcode() {
    return '10'; // <--- this is where you set the centralized value
}
add_shortcode('fixed_price', 'fixed_price_shortcode');
On your pages, you use the shortcode: The fixed fee is [fixed_price].
Update Process: To change the price from '10' to '15', you only edit the value in the functions.php file (or custom plugin code) once. All pages using [fixed_price] will instantly update.
There are plugins specifically designed to let you create a reusable content block (like a price listing or a call-to-action) and insert it across many pages.
Examples: Plugins often called "Reusable Blocks" or "Global Content Blocks." Some page builders like Elementor or Divi have Global Modules features that let you design a price element and link all instances to the original.
How it works: You create a Global Block containing the price text. You insert this block on all necessary pages.
Update Process: You edit the content of the Global Block in one place, and it updates everywhere the block is used.
Your proposed solution of linking two separate text shortcodes (price1 and price2) is generally not how web development works: HTML id attributes are meant for styling or unique scripting hooks, not for content synchronization.
The goal is to eliminate the need for price1 and price2 and replace them with a single source, e.g., fixed_fee, that you reference multiple times.
Old Method (Hard to manage)            | New Method (Centralized)
Page A: Price 1: [price1]              | Page A: Price: [fixed_fee]
Page B: Price 2: [price2]              | Page B: Price: [fixed_fee]
Update: Edit Page A, then edit Page B. | Update: Edit the centralized value once.
I recommend starting with Method 1 (Advanced Custom Fields or similar) as it provides the most flexibility, especially since you have different prices per country and fixed fees. You could set up fields for:
global_fixed_fee
us_variable_price
ca_variable_price
Would you like me to find a popular, highly-rated plugin that offers the "Global Content Block" functionality, or should we focus on using Advanced Custom Fields?
I want to know which is the best approach to implement this.
How are you using DuckDB in your application? Or are you using DuckDB's command-line tools directly?
...so what problem are you having, exactly?
I would highly recommend reading Steve Smith's Architecting Modern Web Apps... eBook here: https://learn.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/
I'm fairly confident you're going to get answers to all of your questions in there.
If you need to send raw bytes over Wi-Fi (not HTTP), most cross-platform frameworks won’t give you this out of the box. NativeScript can do it, but you’ll need to create a small native plugin and call your Java/Kotlin (Android) and Swift/Obj-C (iOS) code from JavaScript.
So yes, it’s technically possible in NativeScript, but if low-level Wi-Fi communication is a core part of your app, then fully native iOS and Android development might be the simpler and more reliable option.
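As a rough sketch of that route on Android: NativeScript exposes the Android runtime directly to TypeScript, so a raw TCP socket can be opened without HTTP (host, port, and payload are placeholders; run this off the UI thread and add error handling in real code):

// Android only: java.net.* is reachable through NativeScript's runtime bindings.
export function sendRawBytes(host: string, port: number, payload: number[]): void {
    const socket = new java.net.Socket(host, port); // plain TCP, no HTTP layer
    const out = socket.getOutputStream();
    payload.forEach((b) => out.write(b)); // write each byte as-is
    out.flush();
    socket.close();
}

The iOS side would need the equivalent through the Obj-C/Swift bindings, which is where the small native plugin usually comes in.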
In my case, it was because Unity switched off Debug mode. To re-enable debug mode so the editor will attach, you can do the following:
In Visual Studio
Debug -> Attach Unity Debugger
(this will open a pop-up; select your Unity instance)
Unity will warn you that you are trying to switch to debug mode; say yes for this session, and now Visual Studio will be able to attach (until you restart Unity; you can enable it for all projects if you accept the performance hit of always having debug mode on).
In Unity
At the bottom left corner, there is this bug-looking icon (the furthest to the left). You can click on that to enable Debug Mode
This refers to https://www.electronjs.org/docs/latest/tutorial/asar-archives: for some APIs that rely on passing the real file path to underlying system calls, Electron will extract the needed file into a temporary file and pass the path of the temporary file to the APIs to make them work. If you need to load a file from an ASAR as if it were a normal file on disk, you must copy (extract) it first.
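A minimal sketch of the manual extract-then-use pattern (paths and the consuming API are hypothetical):

const fs = require('fs');
const os = require('os');
const path = require('path');

// Electron's patched fs can *read* from inside app.asar, but native/system
// APIs that want a real on-disk path cannot, so copy the file out first.
const src = path.join(__dirname, 'assets', 'model.bin'); // lives inside app.asar
const dst = path.join(os.tmpdir(), 'model.bin');
fs.writeFileSync(dst, fs.readFileSync(src));
someNativeApi(dst); // hypothetical API that requires a real file path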
It already has a "grid" mode, so why are you not using that? https://swiperjs.com/demos#grid
The issue occurs because the browser kills the POST when it receives no data from the server for more than 5 minutes (Chrome) or 10 minutes (Firefox): a browser-side "no-data" timeout.
I was able to get rid of it on API35 tablet emulator by adding android.window.PROPERTY_COMPAT_ALLOW_USER_ASPECT_RATIO_OVERRIDE property with value "false" for the application in the manifest.
Source: https://developer.android.com/about/versions/14/features/user-per-app-overrides
I finally found the answer in Swashbuckle's official documentation.
Turns out I was very close:
services.AddSwaggerGen(options =>
{
// ...
options.AddSecurityDefinition("bearer", new OpenApiSecurityScheme
{
Type = SecuritySchemeType.Http,
Scheme = "bearer",
BearerFormat = "JWT",
Description = "JWT Authorization header using the Bearer scheme."
});
options.AddSecurityRequirement(document => new OpenApiSecurityRequirement
{
[new OpenApiSecuritySchemeReference("bearer", document)] = []
});
});
Animepahe is a popular free anime streaming and download website that provides access to a large library of anime shows, often in HD quality.
Stack Overflow traffic is reduced; I can see far fewer questions coming into Stack Overflow. But the good thing is that the new questions are very specific to a niche area or a deep problem.
Same here, I couldn't update state in the router function with a conditional edge.
How do you update a state variable in the router function?
Thanks
If you may excuse me... I also needed assistance installing pip on my MacBook M2. Here's the last action I did, but no installation was made. Can someone help me with this, please? I am new to this, and it's for my work. I appreciate all the help I can get.
You've correctly identified the main factors limiting LS bandwidth:
Maximum Interrupt Poll Rate: $\mathbf{10 \text{ ms (100 Hz)}}$. This is the major bottleneck. This rate is mandated by the USB specification for LS devices.
Maximum Packet Size: $\mathbf{8 \text{ bytes}}$. Also a strict specification for LS (Control and Interrupt endpoints).
Your calculation is spot-on: $100 \text{ packets/sec} \times 8 \text{ bytes/packet} = 800 \text{ bytes/sec}$. You need $\approx 10 \text{ kbytes/sec}$.
Given that you control both ends and compatibility is not an issue, here's an assessment of your options:
Option 1: force a faster polling rate. Feasibility: technically possible, but it requires significant effort and deviation from the spec.
Mechanism: The Host Controller (i.MX RT1060) determines the polling rate.
On the Host (iMX RT1060): You would need to modify the Host Controller Driver (HCD), specifically the part that manages the Scheduling Table or Frame List. For a full-speed/high-speed controller, the frame interval is 1ms (1000 $\mu\text{s}$). For LS interrupts, the standard dictates they can only be scheduled at intervals of $2^{\text{N}}$ milliseconds, where $N \ge 3$, which means $8 \text{ms}$ or $16 \text{ms}$ (10ms is a common approximation or implementation maximum). You would need to force the HCD to schedule the LS interrupt transfer in every $1 \text{ms}$ frame.
Result: If successful, you could achieve a $1 \text{ms}$ polling rate ($\mathbf{1000 \text{ Hz}}$).
New Bandwidth: $1000 \text{ packets/sec} \times 8 \text{ bytes/packet} = 8 \text{ kbytes/sec}$. This gets you very close to your target!
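A throwaway sanity check of that arithmetic (plain C, no hardware involved):

#include <stdio.h>

int main(void) {
    const int packet_bytes = 8;           /* LS max packet size */
    const int intervals_ms[] = {10, 1};   /* stock schedule vs. forced 1 ms */
    for (int i = 0; i < 2; i++) {
        int packets_per_sec = 1000 / intervals_ms[i];
        printf("%2d ms poll -> %5d bytes/sec\n",
               intervals_ms[i], packets_per_sec * packet_bytes);
    }
    return 0; /* prints 800 bytes/sec and 8000 bytes/sec */
}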
Option 2: increase the maximum packet size. Feasibility: not possible.
Mechanism: The maximum packet size for LS Interrupt and Control endpoints is hard-coded into the USB Protocol itself and the hardware of the transceivers and host controllers. This is not a parameter you can change in the driver. The Host Controller (iMX RT1060) is designed to only accept 8-byte packets from an LS device's interrupt endpoint.
Option 3: use a custom device class. Feasibility: low utility for this specific problem.
Mechanism: You would use the standard HID class (which is available on LS) or define a Vendor-Specific Class (VSC).
Impact: A custom class simply allows you to define your own Report/Transfer structure, but it does not override the underlying protocol limitations of the LS bus, specifically the $8 \text{ byte}$ packet size and the $10 \text{ ms}$ polling interval.
The most promising path is to modify the Host Controller Driver (HCD) on the iMX RT1060 to force a $1 \text{ms}$ polling rate for the LS interrupt endpoint.
Understand the HCD: Identify the specific Host Controller hardware (likely a Synopsys or similar core within the iMX RT1060) and locate the relevant driver code (e.g., Linux's OHCI/EHCI/xHCI drivers or the bare-metal equivalent provided by NXP).
Inspect the Schedule: The HCD maintains a Frame List or Schedule for every $1 \text{ms}$ frame. It determines which transfers happen when. LS and FS transfers are scheduled in the same $1 \text{ms}$ frame, but LS transfers must respect the $10 \text{ms}$ rule.
Bypass the Check: You need to find where the driver enforces the minimum $10 \text{ms}$ interval for LS interrupt transfers and override that check to allow scheduling in every $1 \text{ms}$ frame slot.
Packet Chaining: On the Device side (LPC55S69), ensure your firmware is ready to service the endpoint immediately with new data. Since you're polling faster than the spec, you'll need to chain multiple 8-byte packets together as quickly as possible. When the host requests a packet, the device should quickly respond with the next 8-byte chunk of data.
Crucial Warning: Forcing a $1 \text{ms}$ poll rate on an LS device increases the Bus Utilization significantly, potentially violating the $90\%$ max utilization rule and possibly causing timing issues or errors. You must thoroughly test for NAKs, CRC errors, and unexpected disconnects.
No, it is not a lost cause.
You are aiming for $10 \text{ kbytes/sec}$.
The theoretical maximum LS bandwidth is $\approx 150 \text{ kbytes/sec}$ (not counting overhead), so the signaling rate can support your traffic.
By pushing the poll rate from $10 \text{ms}$ to $\mathbf{1 \text{ms}}$, you get $8 \text{ kbytes/sec}$, which is very close to your target and likely sufficient with some minor protocol optimizations.
Recommendation: Invest the time in modifying the Host Controller Driver on the iMX RT1060. This is the only change that directly addresses the major bandwidth bottleneck (the polling rate) and offers the most significant payoff.
It's tempting to increase the allowed packet size until it meets your bandwidth needs, provided that you don't require strict adherence to the original standard.
This article mentions: "There seems to be a roughly 2.9gb limit on the size of an upcoming pack", and this must be the problem, which I've never encountered before.
Referring to phd's answer, I found I can push in stages. I was pushing a fork of llvm/llvm-project, which consists of about 500K commits. I pushed via:
git push origin <local-branch-name>~400000:refs/heads/<remote-branch-name>
git push origin <local-branch-name>~200000:<remote-branch-name>
git push origin <local-branch-name>:<remote-branch-name>
This resolved the issue.
Who knows!! It's hard to say, but software developers will get cheaper; my salary dropped 50% this year.
Wikis and documentation are gonna be your best friend. At the end of the day it's the same information that the LLM you're asking is trained on, so just going straight to the source will give you the same (albeit more rewarding) result.
In the meantime, I have become convinced this is a browser no-data timeout caused by the fact that the browser receives no data from the back-end for longer than 5/10 minutes. So the options are chunked uploads in the front-end, or some heartbeat sent from the back-end periodically.
You can install the Graphviz Interactive Preview into VS Code.
Then you just right-click the .dot file and choose "Preview Graphviz / Dot (beside)".
thanks @usdn for your reply.
By interval I mean all the consecutive elements higher than half-threshold, yes they are non-zero intervals.
Thanks for the suggestion; however, this is part of a larger project, and introducing a new module like pandas or numba just for this issue is unfortunately not a possible solution.
I can use numpy, scipy and pure Python.
Other people gave you great pointers. I just want to give you some advice: stop focusing on "coding". Personally, I hate the word :) But what I actually hate is that it seems to stress the act of writing the code versus thinking about solving the problem you have (whether it is "how do I send an e-mail to more than one recipient?" or "how do I distribute my application across geographic regions?").
You could say that you can program with sticks and stones: once you know how to solve a problem, only then do you start thinking about how to write code that works that way.
If you reach that point, you will be able to write it yourself, maybe not in the best possible way, but totally on your own - except possibly having to look up the parameters of some function call.
In a sense, coding is like cutting a piece of cloth. Great, you can cut it the way you're supposed to cut it: but can you make a dress out of it? That's programming.
@Sergey, Mitochondrial Replacement Therapy (MRT), a child born with DNA from 3 parents: https://www.newscientist.com/article/2107219-exclusive-worlds-first-baby-born-with-new-3-parent-technique/
It can happen on Windows if xxx is a very long string and long paths are disabled.
Even fetch won't work.
Running this restores things to working order:
git config --global core.longpaths true
You should bind chartOptions.dataLabels and chartOptions.colors in your <apx-chart>.
file src/app/app.components.html:
<div id="chart">
<apx-chart
[series]="chartOptions.series"
[chart]="chartOptions.chart"
[labels]="chartOptions.labels"
[responsive]="chartOptions.responsive"
[dataLabels]="chartOptions.dataLabels"
[colors]="chartOptions.colors"
></apx-chart>
</div>
And you have a typo in your colors list; after changing #000FF00 to #00FF00 it would look like this.
As others already pointed out: eval() is close to evil.
First lesson I ever learned when started coding: All input is evil!
Sounds like you are trying hard to shoot yourself in the foot....
AI will not replace developers, but it will change how developers work.
Simple coding and boilerplate tasks will be automated, while humans will focus more on problem-solving, architecture, debugging, and guiding AI.
To stay relevant, developers should focus on:
Strong fundamentals (DSA, system design)
Problem-solving & critical thinking
AI tool mastery (ChatGPT, Copilot)
Debugging & code review
Domain knowledge
Developers who use AI will replace those who don’t — not the other way around.
AI will handle simple coding, while humans focus on problem-solving, system design, and reviewing AI-generated code.
To stay relevant, developers should strengthen fundamentals, think critically, master AI tools, and build domain expertise.
You can use a PHP/Laravel package for the BingX API.
I've been stuck on this for 3 days. I'm building a launcher compiled to GraalVM native-image (machine code, no JVM). The launcher decrypts a 400MB JAR file in memory and needs to execute it without writing to disk for security reasons.
I've tried using the RemoteClassLoader approach from this answer, but ClassLoader.defineClass() doesn't work in native-image, since there's no JVM bytecode interpreter. I've also tried embedding a JVM via JNI (too complex, 200MB+ overhead) and spawning java -jar with temp files (works, but defeats the purpose: it writes the plain JAR to disk).
Is there ANY way to achieve pure in-memory JAR execution from native-image with zero disk writes? How do enterprise launchers (like game launchers, IDEs) handle this? Are there JVM internal APIs or alternative launching mechanisms I'm missing?
The constraint is: GraalVM native-image launcher -> decrypt JAR in memory -> launch without disk writes. I can bundle a JRE and spawn processes, but the application JAR must never touch the filesystem. I have a working method that writes to a temp file; it's simple, but it is not what I want.
Any suggestions would be greatly appreciated! I really need help from you all legends.
I have the same issue. Did you find out the solution?
This approach requires a Docker-based solution where both ffmpeg and Playwright run within containerized environments. The key is to configure Playwright to use a custom FFmpeg path instead of its bundled version.

Implementation approach: FFmpeg is inherently part of Playwright's architecture, and the standard installation automatically includes these libraries. However, you can bypass Playwright's default FFmpeg installation by directing it to use the system-available FFmpeg instead. To achieve this, you'll need to:

1. Create a Docker container that includes both FFmpeg and Playwright.
2. Configure Playwright to skip its bundled FFmpeg installation.
3. Provide the custom FFmpeg path to Playwright so it references the system-installed version rather than downloading or using its own libraries.

Key consideration: while you cannot install Playwright completely without FFmpeg dependencies, you can control which FFmpeg instance it uses. The solution involves instructing Playwright to utilize the FFmpeg binary already present in your Docker environment, effectively overriding its default behavior.
For Firefox, use the plugin GitHub Absolute Dates.
For Chrome, use the plugin GitHub Absolute Dates Plugin.
COOL BRO! 8 YEARS AGO PROBLEM!!!
Thanks for the suggestions. It looks like there is no easy solution for what I want to do. I decided to catch the various localised values on the server and handle them as required. It limits the number of languages I can support but I prefer handling as much as possible server side instead of adding JS to the client.
You can do that: in Docker Desktop, switch from the Apple Virtualization framework to Docker VMM under General options.
This is likely caused by multiple versions of the C++ build tools being installed, or by CMake refusing to generate build files with Visual Studio 15. If you have multiple versions of the build tools installed, uninstall them, or upgrade to a newer version of Visual Studio, preferably 17 2022 or 18 2026.
Selection sort! I'm currently using the "Algorithms: Animated Learning" app to learn algorithms; the animations are clear and easy to understand.
It's not entirely the same, but imagine you run the same project on your machine, which we call localhost, but now you have to run it on a different machine (your hosting server), so things will be a little different. Because it's your backend code, you need a server that can actually run Node.js, not just serve files, and your MongoDB also needs to live somewhere accessible (like MongoDB Atlas).
I thought I was the only one facing an EAS build failure. Funny enough, when I build locally it builds successfully, but when using eas build --platform android --profile preview it gives this error:
.kt:33:26 None of the following candidates is applicable:
constructor(p0: Context!, p1: Class<*>!): Intent
constructor(p0: String!, p1: Uri!): Intent
This problem can be solved by using CAST() in the SQL SELECT query.
Something like this:
SELECT CAST(Id AS UUID) FROM MyEntity;
The answer from Liviu Sosu really helped me.
Thanks!
I think there is nothing wrong with your backend updateName function; it's just that your page.tsx never re-renders. You can call router.refresh() just after this line: await updateName(name as string);
Based on what I've observed, even if you don't specify a reporter, Playwright defaults to a built-in one. Instead of using `reporter: [['list']]`, try switching to either 'dot' or 'json' and see what results you get. The 'dot' reporter provides minimal console output, which is more suitable for CI environments, while the 'json' reporter outputs to a file. Since you are running this in Azure Functions, the verbose output from the 'list' reporter might be causing issues. Please test both options and let me know which one works for your deployment.
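For reference, switching the reporter is a one-line change in the config (a sketch; the output file name is arbitrary):

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
    // minimal console output, CI-friendly:
    reporter: [['dot']],
    // or write results to a file instead:
    // reporter: [['json', { outputFile: 'results.json' }]],
});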
In my case, the problem was that I had created a new Google account but had forgotten to register it as a tester in Google Play Console. Because I am debugging my app, the account doesn't "just work" - it has to be a registered test account.
I went to Google Play Console: Play Games Services | Setup and management | Testers and added the email address. Now it works fine. Got to say though: it wasn't the most helpful error message.
OK, I figured this out... it is <>. So it would be:
AND d.[MyValue] <> Foo
Please see some guidance below:
1 - Is using an SVG <path> the right approach for enabling rotation and node-based reshaping?
SVG <path> is highly suitable for transforming (rotating and resizing) a Cartesian object; the attributes map almost 1-to-1. I'd be surprised if, behind the scenes, Adobe Illustrator or Inkscape don't use similar technology.
Node-based or vector-based reshaping is also achievable with the <path> tag; however difficult, it should be possible to achieve most of the desired functionality. (See the small example below.)
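To illustrate the rotation half, a bare <path> already rotates with a single transform attribute (the coordinates here are arbitrary); node-based reshaping then amounts to rewriting the d attribute:

<svg width="200" height="200" xmlns="http://www.w3.org/2000/svg">
  <!-- rotate(angle cx cy): spin the triangle 30 degrees about its center -->
  <path d="M 50 50 L 150 50 L 100 150 Z"
        transform="rotate(30 100 83)" fill="teal"/>
</svg>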
2 - Are there any recommended patterns or libraries for implementing this functionality?
I'll be describing the mechanisms involved in implementing this functionality:
Upon manipulating a shape in this way, a great number of new UI elements must be introduced to represent the curve (https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorials/SVG_from_scratch/Paths#curve_commands)
These curves will update the path and the other UI elements correspondingly; this project does a similar thing: https://yqnn.github.io/svg-path-editor/
In Playwright, if you want to navigate to a page, the primary method available is `goto()`. However, there are scenarios where you may need to switch the protocol. When you enter a URL in the browser, it internally determines the appropriate protocol to use and navigates accordingly. In Playwright, the situation can be a bit different. To handle such edge cases, you need to check the protocol being used and manage the navigation to the correct URL yourself. Unlike the browser's address bar, which intelligently decides whether to navigate to a domain or search for a string, Playwright requires you to specify the exact URL or domain for testing purposes.
It's interesting to note that the browser differentiates between a domain and a simple string when navigating. If it recognizes the input as a domain, it may return a "404 Not Found" error, while if it identifies it as a string, it directs you to the search engine page. This internal logic helps provide the best possible outcomes during navigation.
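In practice that usually means normalizing the input yourself before calling goto(), e.g. (a sketch; defaulting to https is an assumption):

import { test } from '@playwright/test';

test('normalize before goto', async ({ page }) => {
  const input = 'example.com/path'; // what a user might type
  const url = /^https?:\/\//i.test(input) ? input : `https://${input}`;
  await page.goto(url); // goto() needs a fully qualified URL
});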
You can explicitly declare the character encoding in the XML declaration at the beginning of an XML document using the encoding attribute:
<?xml version="1.0" encoding="UTF-8"?>
Which are the valid xml encoding strings?
Valid XML encoding strings are those that identify a character encoding scheme recognized by XML processors. The XML 1.0 specification requires that all XML processors support UTF-8 and UTF-16. Encoding names must match an IANA-registered character set.
https://www.iana.org/assignments/character-sets/character-sets.xml
what is the way of specifying UTF-8?
encoding="UTF-8"
Encoding names are matched case-insensitively, so "UTF-8" and "utf-8" are both accepted, but per the IANA registered name, use "UTF-8".
Where can I find a list of the official encoding strings?
https://www.iana.org/assignments/character-sets/character-sets.xml
for more reference
The first question would be, what do you want to test, and is toggling Airplane mode the best way to do so?
Having said that: you mention "Platform: iOS (simulators)". If you're specifically looking at iOS Simulators, then you need to consider the following:
Simulators don't have Mobile Data, so there is no way to disable this
The "Wifi" connection for Simulators is not controlled through the Simulator, but it uses the host's internet connection, meaning that if you want to disable the connection, you need to disable the connection on your host machine
Some cloud vendors have custom options to control the "internet" connection, but they are very limited. So I would get "back to the drawing board" and ask myself these questions:
What do I need to test/what is the (business/customer) value of what I need to test?
If it is important, do I need to test this with automation, or can I use this case as one of the last manual test cases before releasing?
Pardon my bumping this thread in 2025. Having come to some of the MCU work late and acquired various older dev boards (like the Due, using SAM3 processors), it has been a wall of frustration trying to get an Atmel-ICE programmer/debugger running under Microchip Studio 7.0 (and I suspect the same holds for Atmel Studio 7.0). It seems that there is not only an issue with which drivers may be installed; there is also an issue where atbackend.exe apparently needs to be run with admin permissions.
This is only a preliminary post, as I believe that one of the installed drivers requires admin privileges, causing the backend to not start properly and hang while attempting to connect to the debugger.
Currently my solution is to run Microchip Studio with admin privileges, or to start it normally, close atbackend.exe by unchecking it in options, and restart it from an admin terminal window. It has worked like a charm since.
When and if I figure out which driver it is, I will update my findings. Until then, I hope this helps those of us latecomers to Atmel Studio/Microchip Studio working with older, non-supported devices and development boards.
Again, sorry for beating a nearly dead thread. Cheers!
Yes — the two private subnets can communicate with each other automatically, even if they are in different Availability Zones, as long as:
1. They are inside the same VPC
2. The route tables allow communication
3. The security groups / network ACLs allow traffic
You do not need internet access, NAT Gateway, or VPC Peering for this.
Communication happens completely within the internal AWS VPC network, which is private and does not touch the public internet.
Think of a VPC like your own private network inside AWS. Every subnet inside it (private or public) can talk to other subnets inside the same VPC — unless you block it with security rules.
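Concretely, every route table in the VPC carries an implicit local route covering the whole VPC CIDR, which is what lets the subnets reach each other (the CIDR below is illustrative):

Destination    Target
10.0.0.0/16    local   <- every subnet in the VPC, across all AZs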
Using the template_redirect action hook, you can load a desired template.
add_action( 'template_redirect', function () {
    global $post;
    // Page IDs are numeric, so compare against the ID, not a string like 'page123'.
    if ( $post->ID == 123 ) {
        wp_redirect( '/your-slug-here-or-php-file' );
        exit;
    }
} );
Off topic. Try dba.stackexchange.com.
When creating the engine, tell it to use encryption:
from sqlalchemy import create_engine

engine = create_engine(db_url,
                       connect_args={"sslmode": "require"})
@Drew Reese my concern is that if I create transformer functions which transform data from the API into the format that I need in my front-end project, will it be useful or just a waste of time? I have no experience doing this, so I am asking the community whether anyone has done this and faced any challenges.
And thank you @Pac0. That last paragraph about keeping component types separate really hit me.
If anyone is looking for a quick solution give this a go - zipkit.io
Before I built this I tried it all (streams, Lambda, background workers), but each solution, for one reason or another, didn't scale.
C, C++ and Python (the # lines are preprocessor directives to C/C++ and comments to Python):
#include<stdio.h>
#define print(x) int main(){printf(x);return 0;}
print("Hello, World!")
On my side it was because I used the include key of Cargo.toml (https://doc.rust-lang.org/cargo/reference/manifest.html#the-exclude-and-include-fields):
[package]
name = "package"
include = ["folder/"]
warning: ignoring library `package` as `src/lib.rs` is not included in the published package
warning: ignoring binary `package` as `src/main.rs` is not included in the published package
error: failed to prepare local package for uploading
Caused by:
no targets specified in the manifest
either src/lib.rs, src/main.rs, a [lib] section, or [[bin]] section must be present
Including the source files and the manifest in include fixes it:

[package]
name = "package"
include = ["LICENSE", "**/*.rs", "folder/", "Cargo.toml"]
@Wiimm Perl or PHP could be an option, though PHP needs installation just as sd does, and it is a little heavier than sd, I think.
@jhnc Thanks; so far I haven't found any problem with ${data//"${key}"/"${value}"}, perhaps this is what I want.
Go to Tools → Preferences → Editor and unselect "Show code annotations"
1) Don't RETURN NULL; instead you need to RETURN a modified NEW. 2) You should also look at the ON CONFLICT clause or, for Postgres 15+, MERGE. 3) Why not use a UNIQUE index on the column and catch the exception?

There are many developers who aren't full-stack. So roles still need to be accurately labeled.
// Source - https://stackoverflow.com/q/67360263
// Posted by hamid
// Retrieved 2025-12-01, License - CC BY-SA 4.0
object ServiceBuilder {
    private val client = OkHttpClient.Builder()
        .connectionSpecs(listOf(ConnectionSpec.MODERN_TLS))
        .build()

    private val retrofit = Retrofit.Builder()
        .baseUrl(Config().BASE_URL) // "https://10.0.2.2:5000"
        .addConverterFactory(GsonConverterFactory.create())
        .client(client)
        .build()

    fun <T> buildService(service: Class<T>): T {
        // Register Conscrypt as the top-priority security provider before use.
        Security.insertProviderAt(Conscrypt.newProvider(), 1)
        return retrofit.create(service)
    }
}
@Ahsan Sarwar,
Do you have any shareable git repo for your alarm code? I am also encountering multiple issues while making it.
DG Kernel has both front-end and back-end functionality to do the work. Point selection, creation of new parametric or meshed objects, and orientation are simple. STL import/export is there. There are plenty of samples and documentation.
So essentially what you're saying is that the lines between front-end and back-end are now blurred. OK, that still doesn't explain why a position advertised as front-end would also need back-end experience. If back-end experience is needed, shouldn't it be labeled full-stack?
In n8n, binary data is already base64 encoded internally, so you don't need to re-encode it. Your approach of accessing the binary property from the previous node and directly using it as base64 is correct. One thing to double-check is that you access the base64 string via binaryData.data (not just binaryData) since binaryData is an object containing data and metadata. Also, ensure your next node or HTTP request to Odoo sends the base64 string properly in the JSON payload without additional encoding.
@gayathri No, that doesn't work if the early accounts fill both segments. https://dbfiddle.uk/E7C2AbBs
As per the latest Hangfire documentation, MySQL is not listed; you are probably constrained to use either Redis or MSSQL (SQL Server). See https://docs.hangfire.io/en/latest/configuration/index.html
Alternatively, you could write your own custom storage configuration, though whether it is worth it is another question.
Or you can also use community-supported storage: https://www.hangfire.io/extensions.html#storages
I'm not sure if I understand your question correctly.
Then, when you select T1, you will see T44.
here is an idea I came up with, idk how acceptable it is though lmao:

switch (foo) {
    case 1, 2, 3 -> {
        // if true
    }
    default -> {
        // if false
    }
}

this could also work:

switch (foo) {
    case 1, 2, 3 -> {
        // if true
    }
}

Main problem with it is that it may not accept all data types.
First, flatten the nested RDD and then join with the second RDD. I also merged the duplicates; however, you can skip that step if needed.
joined = (
    rdd1
    .flatMap(lambda group: group)      # flatten nested groups into (key, value) pairs
    .reduceByKey(lambda a, b: a + b)   # merge duplicate keys (skip if not needed)
    .join(rdd2)                        # join with the second pair RDD on key
)
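For illustration, a toy run with made-up data (sc is an existing SparkContext; the keys and values are hypothetical):

# rdd1 is "nested": each element is a list of (key, value) pairs
rdd1 = sc.parallelize([[("a", 1), ("b", 2)], [("a", 3)]])
rdd2 = sc.parallelize([("a", 10), ("b", 20)])

joined = (
    rdd1
    .flatMap(lambda group: group)      # ("a", 1), ("b", 2), ("a", 3)
    .reduceByKey(lambda a, b: a + b)   # ("a", 4), ("b", 2)
    .join(rdd2)                        # ("a", (4, 10)), ("b", (2, 20))
)
print(joined.collect())  # order of pairs is not guaranteed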
From ikegami's comment with Perl code, I had an idea. GNU C has block expressions, too, so one can do this:
#define rev_comma(lhs, rhs) ({const auto a = (lhs); rhs; a;})
for a reversed comma. It defines a constant a to be the left-hand side, evaluates the right-hand side, then returns a without evaluating the left-hand side again.
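A quick usage sketch (needs GCC or Clang, since statement expressions are a GNU extension, and C23-style auto; on older compilers, __auto_type works the same way):

#include <stdio.h>

#define rev_comma(lhs, rhs) ({const auto a = (lhs); rhs; a;})

int main(void) {
    int x = 5;
    int y = rev_comma(x, x++); /* captures 5, then increments x, yields 5 */
    printf("y=%d x=%d\n", y, x); /* prints y=5 x=6 */
    return 0;
}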
CheatEngine's speed hack works by altering how a game reads time, making the program run faster or slower than normal. It manipulates the system's timing functions instead of changing gameplay directly. This creates the effect of speeding up actions or slowing them down, mainly for testing or experimentation purposes.
When sending content to a WordPress site that uses Elementor, simply posting raw HTML to the 'content' field won't apply Elementor's styling or structure, because Elementor stores its layouts as JSON data linked to widgets and sections. To preserve styling, you typically need to submit content in Elementor's proprietary JSON format or use Elementor's Template API, which is more complex.

If that's too complicated, a common workaround is to use the Classic Editor for content or the Elementor HTML widget to embed raw HTML, though this limits Elementor's advantages. You can also try programmatically adding the required Elementor post meta and data structure, but that requires deep knowledge of how Elementor stores its layout.

For automation, converting your HTML to Elementor JSON would be ideal but is fairly involved. Using a small formatter API might help you transform or validate content formats in your workflow. Overall, if you want full Elementor compatibility, pushing Elementor JSON is the best route; otherwise, falling back to the Classic Editor or the Elementor HTML widget may be simpler.
@Rhys Tedstone Thanks for the advice, I really appreciate it. The interviewer mentioned that I’m not allowed to ask him questions about the challenge and should figure it out myself, so I’m trying to stay careful. I also checked the desktop app, but it doesn’t support disappearing pictures either. I’ll keep exploring the web client side as you suggested.
I’m doing my best with every opportunity right now. I already tried reverse engineering with BurpSuite, but all the traffic is encrypted, so that approach didn’t work. It seems the solution must be on the client-side, so I’ll focus my efforts there next.
I would highly recommend making it so this question can't easily be found by the interviewers, since that basically defeats the point of a technical interview.
I'd say finding the API in the desktop version would be best, since the logic is already designed for the web.
This seems weirdly in-depth for an "interview" - are you sure that's what the task is asking of you?
Yes, ConfigureAwait(false) must be used on both; each await is independent, which means the configuration of the foo task does not automatically carry over to bar.
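In code form (a minimal sketch; FooAsync and BarAsync are placeholders):

using System.Threading.Tasks;

class Example
{
    static async Task FooAsync() => await Task.Delay(10);
    static async Task BarAsync() => await Task.Delay(10);

    static async Task CallerAsync()
    {
        // Each await captures (or suppresses) the context independently:
        await FooAsync().ConfigureAwait(false); // applies to this await only
        await BarAsync().ConfigureAwait(false); // so it must be repeated here
    }
}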
Rookie mistake for me. I'd forgotten to reenable debugging within the project settings.
You probably shouldn't use Thread.Sleep there... I think that's going to run client-side, and there's a single thread there (so I'm guessing the change detection isn't even running in order to update the disabled state...).
You should upgrade your code repository via the upgrade PR, so that you get a higher version of the transforms library (>3.123 should work!).
You appear to be hitting an incompatibility between Docker Engine version 29 and Fabric v3.1.3, v2.5.14 (and earlier). This problem is reported in Fabric issue #5350. Until a version of Fabric is released with a fix, you will need to downgrade your Docker Engine to version 28 (or earlier).
I'd usually disable the button inside the function called (so in your case, the first thing you do in the IncrementCount method is disable the submit button, and update its text/innerHTML if desired, e.g. "Working...").
In addition to updating the SDK, I also needed to manually switch the C# extension from the pre-release to the release version, then restart VS Code.
It is a very old question; still, if anybody needs it: I used three subcategories of the First-Fit-Decreasing (FFD) algorithm for distributing loads to multiple renewable energy sources (a small sketch follows this list):
a) If the loads are very close, e.g. 200, 250, 150 watts etc., -> FFD Average-fit algorithm
b) If the loads are very different, e.g. 150, 20, 300 etc. ->FFD Best-fit or Best-fit algorithm.
c) If the total load is significantly less than the total renewable sources available -> FFD Worst-Fit algorithm.
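A minimal sketch of the base FFD variant in Python (the loads and capacities are illustrative; best-fit and worst-fit differ only in how the target source is chosen):

def first_fit_decreasing(loads, capacities):
    """Place each load (watts) on the first source with room, largest first."""
    remaining = list(capacities)
    placement = {i: [] for i in range(len(capacities))}
    for load in sorted(loads, reverse=True):   # the "decreasing" step
        for i, room in enumerate(remaining):   # the "first fit" step
            if load <= room:
                placement[i].append(load)
                remaining[i] -= load
                break
        else:
            raise ValueError(f"load {load} W fits no source")
    return placement

print(first_fit_decreasing([200, 250, 150], [400, 300]))
# -> {0: [250, 150], 1: [200]}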
I could not find any fix for this error: "Failed to run behave for validation. Check that behave is installed and the behave command is configured correctly in preferences." It appears with the Cucumber Plugin 3.0 in the latest Eclipse IDE 2025-09 while working on a Selenium-Java project. I have no Python setup on this system. I tried the following:
Workarounds tried, with no luck:
1. Go to Window > Preferences > Cucumber in Eclipse.
2. In the Cucumber preferences, against the "Java" option, added the "runner" package of my project.
Another thing tried:
Uninstalled this plugin and installed Cucumber plugin 2.0.0.xxx. That eliminated the error in the feature file, but errors related to "missing class" started to appear. To fix those I tried different versions of cucumber-java and cucumber-testng, but could not fix it.
Can anyone please help solve this for a Selenium-Java project?
front-end isn't always a browser... so maybe they have a front-end built in Java or .NET that also uses a client-side DB?