This is exactly why we are building Defang. With Defang, you can go directly from Docker Compose to a secure and scalable deployment on your favorite cloud (GCP / AWS / DigitalOcean) with a single "defang compose up" command. Networking, compute, storage, LLMs, GPUs - all supported. Take a look and give us your feedback: https://defang.io/
I finally found a solution that worked for me!
Just follow these steps:
List the process using port 8081:
lsof -i :8081
Kill the process using its PID:
kill -9 <PID>
Example:
kill -9 19453
For me, the simplest method is to use the slice() method of the JS String class.
const str = 'This is a veeeeeery looooong striiiiing';
console.log(str.slice(0, 7) + "\u2026"); // Output: "This is…"
You need to install wasm-tools; just run
dotnet workload install wasm-tools
This should fix it.
I am using curl with the -n switch. I have validated the login information in my .netrc file (been using .netrc forever).
I always get "* Basic authentication problem, ignoring."
What gives??
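For reference, the entry I'm using looks like this (host and credentials sanitized to placeholders):

machine api.example.com
login myuser
password mypassword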
Selenium isn't able to detect or interact with system-level popups like the "Save As" dialog because it only operates within the browser's DOM. These popups are actually controlled by the operating system, not the browser, which puts them outside Selenium's reach. To work around this, people often turn to external tools like AutoIt, WinAppDriver, or PyAutoGUI. A better solution, though, is to configure the browser to automatically download files by adjusting its settings to skip the prompt altogether. Another reliable and often more efficient option is to download the file directly using its URL with HttpClient in C#.
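For that last approach, a minimal sketch (the URL and file name are placeholders) might look like:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class FileDownloader
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Fetch the file's bytes directly, bypassing the browser's Save As dialog
        byte[] data = await client.GetByteArrayAsync("https://example.com/files/report.pdf");
        await File.WriteAllBytesAsync(Path.Combine(Path.GetTempPath(), "report.pdf"), data);
        Console.WriteLine("Download complete.");
    }
}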
When you open QuickWatch and it is off the visible screen, hit ALT-SPACE, then 'M', then any arrow. After that, if you move your cursor, the QuickWatch window will follow it. You can drag it back from whatever unviewable dimension it was in before. This sequence works for any window that has focus. You can practice on a visible window before you're forced to use it on an invisible one.
Where was this entered to have it sort properly?
Inspired by @Aia Ashraf's answer:
import 'dart:ui' as ui;
import 'package:flutter_svg/flutter_svg.dart';

Future<ui.Image> svgToImage(String assetPath, {int width = 100, int height = 100}) async {
  final pictureInfo = await vg.loadPicture(SvgAssetLoader(assetPath), null);
  // Use the requested dimensions instead of a hardcoded 50x50
  final image = await pictureInfo.picture.toImage(width, height);
  return image;
}
Then apply it to your paint like this:
final off = Offset(width, height);
canvas.drawImage(image!, off, solidFill);
The below worked for me (in airflow.cfg):
auth_manager=airflow.providers.fab.auth_manager.fab_auth_manager.FabAuthManager
It seems your divider has no width; you can add it to the Container:
SizedBox(
height: 10,
child: Center(
child: Container(
margin: EdgeInsetsDirectional.only(start: 1.0, end: 1.0),
height: 5.0,
width: double.infinity,
color: Colors.red,
),
),
),
Using the google.accounts.id.prompt callback handler to take action is one good option. You may decide to conditionally display a dialog if a credential is returned, or enter a different auth flow if not. isDisplayed is not supported by FedCM, so information on whether the prompt is shown to the user is not available to web apps. You'll want to remove any code that currently depends on the isDisplayed state.
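A rough sketch of that callback approach (the fallback function is a placeholder for your own flow, and the moment methods should be verified against the current GIS docs):

google.accounts.id.prompt((notification) => {
  if (notification.isDismissedMoment() &&
      notification.getDismissedReason() === 'credential_returned') {
    // A credential was returned; the initialize() callback will handle it.
    return;
  }
  // No credential: fall back to another auth flow.
  startFallbackAuthFlow(); // placeholder for your alternative sign-in
});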
You cannot do this. The answer above is incorrect, at least on Quartz version 3.8.0.0: combining two Quartz expressions in one entry simply ignores everything after the first expression.
To completely decouple the YAML file from your flow, you can create a separate flow that is triggered by a scheduler, e.g. every hour, to refresh the file. Inside the scheduled flow, you write the content of the YAML file into a non-persistent Object Store.
Whenever you need to access the file from any flow, you can simply load it from the Object Store by its defined key.
There is now a BigDecimal package: https://pub.dev/documentation/big_decimal/latest/big_decimal/BigDecimal-class.html
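Usage is roughly like this (a sketch based on the package docs; verify the API against your installed version):

import 'package:big_decimal/big_decimal.dart';

void main() {
  final a = BigDecimal.parse('0.1');
  final b = BigDecimal.parse('0.2');
  print(a + b); // 0.3, with no binary floating-point error
}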
Snowflake lacks integrated reporting capabilities because it is a data warehouse. To generate reports, you'll need to use a business intelligence tool like Excel, Tableau, or Power BI.
If you're running into the "SessionNotCreatedException: user data directory is already in use" error with Selenium Edge, try setting up a fresh, temporary user data directory for each session or just leave out the --user-data-dir argument altogether. Running Edge in headless or incognito mode can also help prevent profile conflicts. Make sure you always close the driver properly with driver.quit() after saving your table. If you're using Great-Tables, it's a good idea to pass in a custom Selenium driver that's set up this way. And don’t forget to double check that your Edge, EdgeDriver, and Selenium versions all match up.
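A minimal sketch of the temp-profile approach (paths and the target URL are illustrative):

import tempfile

from selenium import webdriver

# A fresh, throwaway profile so parallel sessions never fight over the same directory
user_data_dir = tempfile.mkdtemp(prefix="edge-profile-")

options = webdriver.EdgeOptions()
options.add_argument(f"--user-data-dir={user_data_dir}")
options.add_argument("--headless=new")  # optional; also sidesteps some profile conflicts

driver = webdriver.Edge(options=options)
try:
    driver.get("https://example.com")
    # ... save your table here ...
finally:
    driver.quit()  # always release the profile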
You can install https://github.com/davidodenwald/prettier-plugin-jinja-template, which follows the same templating rules as django-html in most cases.
I am getting the same error. I have attached the GitHub repo link here; can anyone help me out with this?
Thank you so much
I was pulling my hair out, and OpenAI, Gemini, Grok, and Claude all failed to give this resolution.
They were all pretty good, but leaned towards it being a bug with Apple.
A standard PayPal Subscriptions integration with the JS SDK will open a modal/mini window for payment. You can generate a button at https://www.paypal.com/billing/plans to see the experience.
According to this Google Issue Tracker post, comment #3:
"These issues are all related to underlying data on Google Finance and are not specific to Sheets or the =GOOGLEFINANCE() formula."
In other words, it's a data issue on Google's side. The missing data for some stock tickers in Google Sheets, such as HKG:2800, isn't caused by an error in your formula or by Sheets itself; the problem is that Google Finance, which supplies the stock data to Sheets, doesn't have reliable or up-to-date information for those specific tickers. Even a correct formula that works for other stocks cannot return results when Google Finance lacks valid or current data for a particular ticker.
Another post contains a suggestion that addresses this issue: "Please use the Send Feedback button in Google Finance to report data issues. Please verify your ticker symbol. If Google Finance has the correct data, but the =GOOGLEFINANCE() formula is failing, please send feedback directly within the Google Sheets application menu: Help > Report a problem."
Yes, one unit operates on one warp in lockstep, but warps can be swapped with a context switch. Obviously, there are usually many more warps than warp schedulers, so they will also run sequentially. In theory, a GPU could assign threads to warps depending on which branch they take (I don't know of one that does, but it seems like a viable option; constant branches are eliminated anyway and usually have no effect with modern compilers and GPUs).
I cannot imagine a device that runs different branches in parallel. It may be possible, but as long as you have fewer schedulers and more warps to process, it makes only half sense: you switch the whole warp, and you would still need to finish the longest branch. Sure, it would reduce latency, because you could execute both branches concurrently, but it still wouldn't eliminate the other problem.
As long as you are doing the same operations in the same order in different branches, and just use different data, it should be okay and perform the same instructions for all threads without stalls or computing both variants.
The last thing I'm generally curious about: if a GPU architecture allowed thread swaps between warps, there would be even better possibilities for branches and so on. Also, don't take my statements as complete truth; I don't know that much either. It may be better to look at AMD, as they have a more open (to inspection) architecture.
Why not define not() yourself?
not <- function(x) !x
TRUE |> not()
Your best bet is to use Twscrape with several Twitter accounts and rotate the auth_tokens. This spreads out the load and helps you get around the 600-tweet-per-day limit tied to each token. Browser automation tools like Playwright or Selenium are also solid options since they mimic real user behavior, though they tend to be slower and a bit more complex to set up. Using proxies is a smart move too, because it helps mask your IP and lowers the chances of getting flagged. And if you're not into coding, Apify's Twitter Scraper is a great plug-and-play option that handles pagination for you.
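A rough sketch of the Twscrape route (credentials are placeholders; the API surface is from the project's README, so double-check it against your installed version):

import asyncio

from twscrape import API

async def main():
    api = API()
    # Add several accounts; twscrape rotates their auth tokens for you
    await api.pool.add_account("user1", "pass1", "user1@example.com", "mailpass1")
    await api.pool.add_account("user2", "pass2", "user2@example.com", "mailpass2")
    await api.pool.login_all()

    async for tweet in api.search("your query", limit=100):
        print(tweet.id, tweet.user.username)

asyncio.run(main())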
It is unclear why or how, but simply starting and stopping the Directory Services Authentication Scripts found here permanently resolved the issue. The issue will still manifest after the start-auth script runs; it's only remediated after the stop-auth script is executed. Now that the issue is resolved, I have no way to test or determine which specific command in the script is the key.
Sure:
await page.locator('input field locator').pressSequentially('text to enter', { delay: 100 }) //the delay is optional, can be adjusted to be slower or faster
Select the string key in the base language table and press the delete key on your keyboard.
Looks like removing the user and adding back to the DB solved the issue for us.
I think I got it to work by changing the cancelTouch function:
const cancelTouch = (e: TouchEvent) => e.cancelable && e.preventDefault();
I have just created a project fixing their code. I will publish it to Maven soon.
No Compose is needed.
https://github.com/ronenfe/material-color-utilities-main

I am working with CATS and am having the same issue. Does the same solution apply, and if so, which file would I need to edit?
For me, resyncing the project with Gradle files helped. I didn't modify anything, because the code worked before I shut down the system. I am using the latest Android Studio (Meerkat | 2024.3.1 Patch 2), but it is still an issue.
To configure WebClient to use a specific DNS server or rely on the system's resolver, you'll need to customize the underlying HttpClient's resolver. If you want to stick with the system DNS, use DefaultAddressResolverGroup.INSTANCE, which follows your OS-level settings. To set a custom DNS server (like 8.8.8.8), create a DnsAddressResolverGroup using a DnsNameResolverBuilder and a SingletonDnsServerAddressStreamProvider.
If you're working with an HTTP proxy, make sure it's properly set up in HttpClient and supports HTTPS tunneling through the CONNECT method. In more locked-down environments, it's a good idea to combine proxy settings with the system resolver and enable wiretap logging for better reliability and easier debugging.
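A sketch of both resolver variants with Spring's Reactor Netty-backed WebClient (8.8.8.8 is just an example server; verify class names against your Netty version):

import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.resolver.DefaultAddressResolverGroup;
import io.netty.resolver.dns.DnsAddressResolverGroup;
import io.netty.resolver.dns.DnsNameResolverBuilder;
import io.netty.resolver.dns.SingletonDnsServerAddressStreamProvider;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

import java.net.InetSocketAddress;

public class WebClientDnsConfig {

    // Variant 1: follow the operating system's resolver settings
    public static WebClient systemResolverClient() {
        HttpClient http = HttpClient.create()
                .resolver(DefaultAddressResolverGroup.INSTANCE);
        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(http))
                .build();
    }

    // Variant 2: query a specific DNS server (example: 8.8.8.8)
    public static WebClient customDnsClient() {
        DnsAddressResolverGroup dnsGroup = new DnsAddressResolverGroup(
                new DnsNameResolverBuilder()
                        .channelType(NioDatagramChannel.class)
                        .nameServerProvider(new SingletonDnsServerAddressStreamProvider(
                                new InetSocketAddress("8.8.8.8", 53))));
        HttpClient http = HttpClient.create().resolver(dnsGroup);
        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(http))
                .build();
    }
}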
I had some annotations with normalized values above 1, like "bottomX": 1.00099
In my case (Apache 2.4 win64, PHP 8.1.1 win32 vs16 x64), the problem was solved by the following: copy libsasl.dll from the PHP directory to apache/bin.
You cannot prevent the way that the Android system works. You need to handle your own session and state per the Designing and Developing Plugins guide.
I decided to just use Umbraco Cloud to host, and I've recreated the site there. The most likely issue was that my views referenced content IDs that existed only in my local database. I noticed and resolved this on Umbraco Cloud, which was cheaper for hosting anyway.
Sounds like a bug described in this [github issue](https://github.com/Azure/azure-cli/issues/17179) where `download-batch` doesn't distinguish between blobs and folder entries. It lists everything in the container and then attempts to incorrectly download "config-0000" as a file, and it writes a file with this name to your destination dir. Then it does a similar thing with "config-0000/scripts", but "config-0000" is a file, and that's where the "Directory is expected" error message comes from.
A possible workaround that might have worked for you is to specify a pattern that wouldn't match any of your folders in blob storage, like: `--pattern *.json`.
So, with the hint about the functions from @Davide_sd, I made a generic method that lets me pretty easily control how the sub-steps are split up. Basically, I'm manually deriving the functions I split off but, much like cse, keeping the results in a dictionary shared among all occurrences.
The base expressions that make up the calculation are input and never modified; the derivation list is seeded with what you want to derive (multiple expressions are fine), and it will recursively derive them, using the expression list as required.
At the end, I can still use cse to a) bring it into that format should you require it, and b) factor out even more common occurrences.
It works decently well with my small example; I may update it as I add more complexity to the function I need derived.
from sympy import *
def find_derivatives(expression):
    derivatives = []
    if isinstance(expression, Derivative):
        #print(expression)
        derivatives.append(expression)
    elif isinstance(expression, MatrixBase):
        # Check matrices element-wise (before Basic, since immutable matrices are Basic too)
        for i in range(expression.rows):
            for j in range(expression.cols):
                derivatives += find_derivatives(expression[i, j])
    elif isinstance(expression, Basic):
        for a in expression.args:
            derivatives += find_derivatives(a)
    return derivatives
def derive_recursively(expression_list, derive_done, derive_todo):
    newly_derived = {}
    for s, e in derive_todo.items():
        print("Handling derivatives in " + str(e))
        derivatives = find_derivatives(e)
        for d in derivatives:
            if d in newly_derived:
                #print("Found derivative " + str(d) + " in done list, already handled!")
                continue
            if d in derive_todo:
                #print("Found derivative " + str(d) + " in todo list, already handling!")
                continue
            if d in expression_list:
                #print("Found derivative " + str(d) + " in past list, already handled!")
                continue
            if d.expr in expression_list:
                expression = expression_list[d.expr]
                print(" Deriving " + str(d.expr) + " w.r.t. " + str(d.variables))
                print(" Expression: " + str(expression))
                derivative = Derivative(expression, *d.variable_count).doit().simplify()
                print(" Derivative: " + str(derivative))
                if derivative == 0:
                    e = e.subs(d, 0)
                    derive_todo[s] = e
                    print(" Replacing main expression with: " + str(e))
                    continue
                newly_derived[d] = derivative
                continue
            print("Did NOT find base expression " + str(d.expr) + " in provided expression list!")
    derive_done |= derive_todo
    if len(newly_derived) == 0:
        return derive_done
    return derive_recursively(expression_list, derive_done, newly_derived)
incRot_c = symbols('aX aY aZ')
incRot_s = Matrix(3,1,incRot_c)
theta_s = Function("theta")(*incRot_c)
theta_e = sqrt((incRot_s.T @ incRot_s)[0,0])
incQuat_c = [ Function(f"i{i}")(*incRot_c) for i in "WXYZ" ]
incQuat_s = Quaternion(*incQuat_c)
incQuat_e = Quaternion.from_axis_angle(incRot_s/theta_s, theta_s*2)
baseQuat_c = symbols('qX qY qZ qW')
baseQuat_s = Quaternion(*baseQuat_c)
poseQuat_c = [ Function(f"p{i}")(*incRot_c, *baseQuat_c) for i in "WXYZ" ]
poseQuat_s = Quaternion(*poseQuat_c)
# Could also do it like this and in expressions just refer poseQuat_s to poseQuat_e, but output is less readable
#poseQuat_s = Function(f"pq")(*incRot_c, *baseQuat_c)
poseQuat_e = incQuat_s * baseQuat_s
expressions = { theta_s: theta_e } | \
{ incQuat_c[i]: incQuat_e.to_Matrix()[i] for i in range(4) } | \
{ poseQuat_c[i]: poseQuat_e.to_Matrix()[i] for i in range(4) }
derivatives = derive_recursively(expressions, {}, { symbols('res'): diff(poseQuat_s, incRot_c[0]) })
print(derivatives)
elements = cse(list(expressions.values()) + list(derivatives.values()))
pprint(elements)
Try this!
RawRequest: "*\+*" or RawRequest:*\+*
The speedup is insignificant because you only sped up an insignificant part of the overall work. Most time is spent by the primes[...] = False commands, and they're the same for both wheels.
Official Microsoft documentation:
https://learn.microsoft.com/en-us/nuget/reference/nuget-exe-cli-reference?tabs=windows
I have this code, but it won't work for me; it just gets stuck:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
class Program
{
static void Main()
{
int[] LICEO = Enumerable.Range(4, 15).ToArray();
List<int> IUSH = Enumerable.Range(18, 43).ToList();
Console.WriteLine("Ingrese las edades de RONDALLA separadas por punto y coma (;):");
ArrayList RONDALLA = new ArrayList(Console.ReadLine().Split(';').Select(double.Parse).ToArray());
Console.WriteLine("LICEO : " + string.Join(", ", LICEO));
Console.WriteLine("IUSH : " + string.Join(", ", IUSH));
Console.WriteLine("RONALLA : " + string.Join(", ", RONDALLA.ToArray()));
int diferencia = (int)RONDALLA.Cast<int>().Max() - LICEO.Min();
Console.WriteLine($"Diferencia entre la edad mayor de RONDALLA y la menor del LICEO es: {diferencia}");
int sumaIUSH = IUSH.Sum();
double promedioIush = IUSH.Average();
Console.WriteLine($"La sumatira de las edades de IUSH es: {sumaIUSH}");
Console.WriteLine($"El promedio de las edades de IUSH es: {promedioIush}");
Console.WriteLine("Ingrese la edad que sea buscar del LICEO:");
int edadBuscadaLICEO = int.Parse(Console.ReadLine());
int posicionLICEO = Array.IndexOf(LICEO, edadBuscadaLICEO);
if (posicionLICEO != -1)
{
Console.WriteLine($"La edad {edadBuscadaLICEO} existe en la posición {posicionLICEO}.");
Console.WriteLine($"La edad en IUSH en la misma posición: {(posicionLICEO < IUSH.Count ? IUSH[posicionLICEO].ToString() : "N/A")}");
Console.WriteLine($"La edad en RONDALLA en la misma posición: {(posicionLICEO < RONDALLA.Count ? RONDALLA[posicionLICEO].ToString() : "N/A")}");
}
else
{
Console.WriteLine($"La edad {edadBuscadaLICEO} no existe en el LICEO.");
}
List<int> SALAZAR = LICEO.Concat(IUSH).ToList();
Console.WriteLine("Edades de SALAZAR: " + string.Join(", ", SALAZAR));
SALAZAR.Sort();
SALAZAR.Reverse();
Console.WriteLine("5 edades más altas de SALAZAR: " + string.Join(", ", SALAZAR.Take(5)));
Console.WriteLine("5 edades más bajas de SALAZAR: " + string.Join(", ", SALAZAR.OrderBy(x => x).Take(5)));
int[] edadesEntre15y25 = SALAZAR.Where(edad => edad >= 15 && edad <= 25).ToArray();
int cantidad = edadesEntre15y25.Length;
double porcentaje = (double)cantidad / SALAZAR.Count * 100;
Console.WriteLine($"Cantidad de edades entre 15 y 25 años: {cantidad}");
Console.WriteLine($"Porcentaje de edades entre 15 y 25 años: {porcentaje:F2}%");
}
}
Well, install works:
winget install --id Microsoft.Powershell
But the MS documentation says my original command should have worked. Frustrating.
Azure Database - I'm including here SQL Database, SQL Elastic Pool, and MySQL Flexible Server - scaling cannot be performed in real time, because it incurs downtime. It can range from a few seconds to a few hours depending on the size of your workload (Microsoft expresses this downtime in terms of "minutes per GB" in some of their articles).
See this post from 2017, where they describe downtimes of up to 6 hours with ~250GB databases:
How do you automatically scale-up and down a single Azure database?
You probably see where I'm going with this: you automatically scale up and down on your own. You need to either build your own tools or do it manually. There is no built-in support for this (and with reason).
I have to say that lately, for Azure SQL Pools, we are seeing extremely fast tier scaling (i.e. < 1 min) with databases in the range of 100-200GB, so the Azure team has probably come a long way in improving tier changes since 2017.
For MySQL Flexible Server, I've seen it's almost never less than 4-5 minutes, even for small servers. But this is a very new service; I am sure it will get better with time.
This downtime is probably why Azure did not add out-of-the-box autoscaling, instead providing users metrics and APIs so they can choose when and how to scale according to their business needs and applications. Again, depending on your business case and workload, those downtimes might be tolerable if properly handled (at specific times of the day, etc.).
For example, for our development and staging environments we are using this (disclaimer: I built it):
https://github.com/david-garcia-garcia/azureautoscalerapp
and have set up rules that cater to our staging environment needs: the pool scales automatically between 20 DTU and 800 DTU according to real usage. DTUs are scaled to a minimum of 50 between 6:00 and 18:00 to reduce disruption. Provisioned storage also scales up and down automatically (in the staging pools, databases are added and removed automatically all the time; some are small, others several hundred GB).
There is a downtime, but it is so small that properly educating our QA team allowed us to cut our MSSQL costs by more than half.
- Resources:
    myserver_pools:
      ResourceId: "/subscriptions/xxx/resourceGroups/mygroup/providers/Microsoft.Sql/servers/myserver/pool/{.*}"
      Frequency: 5m
      ScalingConfigurations:
        Baseline:
          ScaleDownLockWindowMinutes: 50
          ScaleUpAllowWindowMinutes: 50
          Metrics:
            dtu_consumption_percent:
              Name: dtu_consumption_percent
              Window: 00:05
            storage_used:
              Name: storage_used
              Window: 00:05
          TimeWindow:
            Days: All
            Months: All
            StartTime: "00:00"
            EndTime: "23:59"
            TimeZone: Romance Standard Time
          ScalingRules:
            autoadjust:
              ScalingStrategy: Autoadjust
              Dimension: Dtu
              ScaleUpCondition: "(data) => data.Metrics[\"dtu_consumption_percent\"].Values.Select(i => i.Average).Take(3).Average() > 85" # Average DTU > 85% for 3 minutes
              ScaleDownCondition: "(data) => data.Metrics[\"dtu_consumption_percent\"].Values.Select(i => i.Average).Take(5).Average() < 60" # Average DTU < 60% for 5 minutes
              ScaleUpTarget: "(data) => data.NextDimensionValue(1)" # You could actually specify the DTU number manually, and the system will find the closest valid tier
              ScaleDownTarget: "(data) => data.PreviousDimensionValue(1)" # You could actually specify the DTU number manually, and the system will find the closest valid tier
              ScaleUpCooldownSeconds: 180
              ScaleDownCoolDownSeconds: 3600
              DimensionValueMax: "800"
              DimensionValueMin: "50"
        OfficeHours: # name assumed for the second configuration block
          TimeWindow:
            Days: ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
            Months: All
            StartTime: "06:00"
            EndTime: "17:00"
            TimeZone: Romance Standard Time
          ScalingRules:
            # Warm up things for office hours
            minimum_office_hours:
              ScalingStrategy: Fixed
              Dimension: Dtu
              ScaleTarget: "(data) => (50).ToString()"
            # Always have 100GB or 25% extra space, whichever is greater.
            fixed:
              ScalingStrategy: Fixed
              Dimension: MaxDataBytes
              ScaleTarget: "(data) => (Math.Max(data.Metrics[\"storage_used\"].Values.First().Average.Value + (100.1*1024*1024*1024), data.Metrics[\"storage_used\"].Values.First().Average.Value * 1.25)).ToString()"
In my case, it occurred only due to the wrong .jks file selection. After making sure the accurate key was selected, the issue disappeared, and the build was successful.
I am also working with a naturally Unicode language, and the easiest, surest fix is as follows:
1) Download a Unicode-supporting TrueType font. An example is OpenDyslexic-Regular.
Here is the GitHub repository for it.
2) Either work in the directory where you downloaded the .ttf while in Python, or give the complete path.
3)
pdf = FPDF()
pdf.add_page()
pdf.add_font("OpenDyslexic-Regular", "", "./OpenDyslexic-Regular.ttf", uni=True)
pdf.set_font("OpenDyslexic-Regular", "", 8)
pdf.multi_cell(0, 10, txt="çiöüşp alekrjgnnselrjgnaej")
pdf.output("##.pdf")
from fpdf import FPDF
from datetime import datetime
# Current date
today = datetime.today().strftime("%d/%m/%Y")
# Letter body (kept in Bengali as sample Unicode text)
letter_content = f"""
প্রাপক:
মরহুম মোঃ আমিনুর ইসলাম-এর পরিবার/উত্তরাধিকারীগণ
গ্রাম: পাঁচগাছী, উপজেলা: কুড়িগ্রাম সদর, জেলা: কুড়িগ্রাম।
মোবাইল: 01712398384
বিষয়: ধার পরিশোধ সংক্রান্ত
মান্যবর,
আমি মোঃ আলআমিন সরকার, পিতা: মৃত- আবুল হোসেন, গ্রাম: পলাশবাড়ী, ডাকঘর: খলিলগঞ্জ, উপজেলা: কুড়িগ্রাম সদর, জেলা: কুড়িগ্রাম। ২০১৬ সাল থেকে মরহুম মোঃ আমিনুর ইসলাম এর সঙ্গে সুসম্পর্কে ছিলাম। আমাদের ব্যক্তিগত সম্পর্কের ভিত্তিতে, তিনি আমার নিকট মোট চার ধাপে ৫৮,০০০/- টাকা ধার গ্রহণ করেন। নিচে ধার নেওয়ার তারিখ ও পরিমাণ উল্লেখ করা হলো:
১। [তারিখ] - [টাকার পরিমাণ]
২। [তারিখ] - [টাকার পরিমাণ]
৩। [তারিখ] - [টাকার পরিমাণ]
৪। [তারিখ] - [টাকার পরিমাণ]
এই লেনদেনগুলো আমার ব্যক্তিগত নোটবুকে লিখিত রয়েছে এবং একটি বা একাধিক লেনদেনের সময় একজন সাক্ষী উপস্থিত ছিলেন।
মরহুমের হঠাৎ মৃত্যুতে আমি গভীরভাবে শোকাহত, কিন্তু এই আর্থিক বিষয়টি নিয়ে আমি বিপাকে পড়েছি।
আপনাদের কাছে বিনীত অনুরোধ, মরহুমের সম্পত্তির উত্তরাধিকারী হিসেবে এই দেনার বিষয়টি বিবেচনায় এনে তা পরিশোধের ব্যবস্থা গ্রহণ করবেন।
আপনাদের সদয় সহযোগিতা প্রত্যাশা করছি।
ইতি,
মোঃ আলআমিন সরকার
মোবাইল: 01740618771
তারিখ: {today}
"""

# Close the string and write the letter out with a Unicode TTF font, as above
# (the font file name is an assumption; any Bengali-capable .ttf will do)
pdf = FPDF()
pdf.add_page()
pdf.add_font("Bangla", "", "./kalpurush.ttf", uni=True)
pdf.set_font("Bangla", "", 12)
pdf.multi_cell(0, 10, txt=letter_content)
pdf.output("letter.pdf")
Ah, I feel your frustration. Industrial cameras can definitely be tricky to get working with libraries like EmguCV, especially when they rely on special SDKs or drivers. Let's break it down and see how we can get things moving.
EmguCV (just like OpenCV, which it's based on) uses standard interfaces (like DirectShow on Windows, or V4L on Linux) to access cameras. So if your industrial camera:
Requires a proprietary SDK, or
Doesn't expose a DirectShow interface,
then EmguCV won’t be able to see or use it via the usual Capture or VideoCapture class.
Does your camera show up in regular webcam apps?
If it doesn't show up in apps like Windows Camera or OBS, then it's not available via DirectShow, meaning EmguCV can't access it natively.
Check EmguCV camera index or path
If the camera does appear in regular apps, you can try:
var capture = new VideoCapture(0); // Try index 1, 2, etc.
But again, if your camera uses its own SDK (like Basler’s Pylon, IDS, Daheng SDK, etc.), this won’t work.
Most industrial cameras provide their own .NET-compatible SDKs. Use that SDK to grab frames, then feed those images into EmguCV like so:
// Assume you get a Bitmap or raw buffer from the SDK
Bitmap bitmap = GetFrameFromCameraSDK();
// Convert to an EmguCV Mat
Mat mat = bitmap.ToMat(); // or use CvInvoke.Imread if saving to disk temporarily
// Now use EmguCV functions on mat
You’ll basically use the vendor SDK to acquire, and EmguCV to process.
If you're feeling ambitious and want to keep using EmguCV’s patterns, you could extend Capture or create a custom class to wrap your camera SDK, but that’s quite involved.
EmguCV doesn’t natively support cameras that require special SDKs.
Most industrial cameras do require their own SDKs to function.
Your best bet: Use the SDK to get frames, then convert those into Mat or Image<Bgr, Byte> for processing.
Is there a specific need to use the "product" Model? Maybe an abstract Model would do the trick in your case?
class product(models.Model):  # COMMON
    product_name = models.CharField(max_length=100)
    product_id = models.PositiveSmallIntegerField()
    product_desc = models.CharField(max_length=512)
    # ... other shared fields and functions
    class Meta:
        abstract = True

class shirt(product):
    product_type = "shirt"

    class Size(models.IntegerChoices):
        S = 1, "SMALL"
        M = 2, "MEDIUM"
        L = 3, "LARGE"
        # (...)

    size = models.PositiveSmallIntegerField(
        choices=Size.choices,
        default=Size.S,
    )
Mermaid.ink has been known to timeout when rendering graphs with very short node names like "A", particularly in Jupyter notebooks using langgraph. This appears to be a bug or parsing edge case in Mermaid.ink’s backend. Longer node names such as "start_node" or "chatbot" tend to work reliably and avoid the issue. Interestingly, the same Mermaid code usually renders fine in the Mermaid Live Editor, suggesting the problem is specific to Mermaid.ink’s API or langgraph’s integration. Workarounds include using longer node names, switching to Pyppeteer for local rendering, or running a local Mermaid server via Docker.
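If you go the local-rendering route, a sketch along these lines avoids Mermaid.ink entirely (assuming `graph` is your compiled langgraph app; verify the API names against your installed langchain_core version):

from langchain_core.runnables.graph import MermaidDrawMethod

# Render locally with Pyppeteer instead of calling the Mermaid.ink API
png_bytes = graph.get_graph().draw_mermaid_png(
    draw_method=MermaidDrawMethod.PYPPETEER,
)
with open("graph.png", "wb") as f:
    f.write(png_bytes)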
Boombastick,
I know it's been a while since you asked, but I recently discovered that there are some unanswered questions on Stack Overflow regarding the xeokit SDK. So, just in case it might still be relevant and good to know for others too, we always recommend the following steps when trying to tackle an issue:
In your particular case, it would be useful if you could reproduce the bug with one of the SDK or BIM Viewer examples from https://xeokit.github.io/xeokit-sdk/examples/index.html and then post it on GitHub Issues. There is usually someone there to take care of bugs.
Use @ValidateIf and handle the logic inside it. Instead of layering multiple @ValidateIfs and validators, consolidate validation using a single @ValidateIf() for each conditional branch (the two branches are shown separately below; in a real DTO you would stack the decorators on one property, or use the custom validator that follows):
@ValidateIf((o) => o.transactionType === TransactionType.PAYMENT)
@IsNotEmpty({ message: 'Received amount is required for PAYMENT transactions' })
@IsNumber()
receivedAmount?: number;
@ValidateIf((o) => o.transactionType === TransactionType.SALE && o.receivedAmount !== undefined)
@IsEmpty({ message: 'Received amount should not be provided for SALE transactions' })
receivedAmount?: number;
Create a custom validator for receivedAmount:
import {
registerDecorator,
ValidationOptions,
ValidationArguments,
} from 'class-validator';
export function IsValidReceivedAmount(validationOptions?: ValidationOptions) {
return function (object: any, propertyName: string) {
registerDecorator({
name: 'isValidReceivedAmount',
target: object.constructor,
propertyName: propertyName,
options: validationOptions,
validator: {
validate(value: any, args: ValidationArguments) {
const obj = args.object as any;
if (obj.transactionType === 'PAYMENT') {
return typeof value === 'number' && value !== null;
} else if (obj.transactionType === 'SALE') {
return value === undefined || value === null;
}
return true;
},
defaultMessage(args: ValidationArguments) {
const obj = args.object as any;
if (obj.transactionType === 'SALE') {
return 'receivedAmount should not be provided for SALE transactions';
}
if (obj.transactionType === 'PAYMENT') {
return 'receivedAmount is required for PAYMENT transactions';
}
return 'Invalid value for receivedAmount';
},
},
});
};
}
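Then apply it in your DTO (the class name and other fields here are illustrative):

class CreateTransactionDto {
  transactionType: TransactionType;

  @IsValidReceivedAmount()
  receivedAmount?: number;
}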
Very late to the game, but I've got a nice workaround.
Add a container element into the rack, and then add your smaller equipment into it.
This way it will work with other rack elements, but won't be resized.
It seems the answer is to leave out the cross compilation arguments.
export CFLAGS="-arch x86_64"
./configure --enable-shared
Configure gets confused about cross compilation on macOS, because when it tries to execute a cross compiled program, the program does not fail (thanks to Rosetta, I presume).
You should always do your due diligence when adding a new package to your codebase; at the end of the day, it is third-party code.
I think your main worry is your credentials being exposed. This package in particular seems to be popular enough to be battle tested and trusted by a good chunk of the community.
I think you'll be fine. Just remember to keep your credentials a secret and that means not adding them to version control. Use env variables or any of the other methods listed here to set your credentials.
Add this to your functions.php:
function ninja_table_por_post_id() {
$post_id = get_the_ID();
$shortcode = '[ninja_tables id="446" search=0 filter="' . $post_id . '" filter_column="Filter4" columns="name,address,city,website,facebook"]';
return do_shortcode($shortcode);
}
add_shortcode('tabla_post_actual', 'ninja_table_por_post_id');
This code creates a new shortcode called [tabla_post_actual]; inserting it in any WordPress template or content will display the table filtered by the current post's ID.
Usage:
<?php echo do_shortcode('[tabla_post_actual]'); ?>
For all modern versions of the sdk, it's just dotnet fsi myfile.fsx
Based on the available information, Babu89BD appears to be a web-based platform, likely associated with online gaming or betting services. However, the site https://babu89bd.app provides very limited public information about what the app actually does, how to use it, or whether it's secure and legitimate.
If you're trying to figure out its purpose:
It seems to require login access before showing any details, which may be a red flag.
The site doesn't list a privacy policy, terms of service, or contact information, which are important factors for trust and transparency.
The design and naming resemble other platforms often used for online gambling, especially popular in South Asia.
Caution is advised if you're unsure about the legitimacy. Avoid entering personal or financial information until you can verify its credibility.
If anyone has used this app and can confirm its features or authenticity, please share your insights.
From what I could find on the JetBrains website, you can disable it by including an empty file named .noai in IntelliJ.
This worked for me, so it should hopefully work for you as well.
Just run the command:
composer remove laravel/jetstream
I am not sure where these anchor boxes are coming from.
Are they defined during the model training process?
I am doing custom object detection using mediapipe model_maker, but it creates a model that has two outputs, of shape [1, 27621, 4] and shape [1, 27621, 3].
I am totally confused about what is going on and how I can get the four outputs I want: locations, classes, scores, and detections.
Following is my current code; please help me understand what's going on and how to obtain the desired outputs.
# Set up the model
quantization_config = quantization.QuantizationConfig.for_float16()
spec = object_detector.SupportedModels.MOBILENET_MULTI_AVG
hparams = object_detector.HParams(export_dir='exported_model', epochs=30)
options = object_detector.ObjectDetectorOptions(
supported_model=spec,
hparams=hparams
)
# Run retraining
model = object_detector.ObjectDetector.create(
train_data=train_data,
validation_data=validation_data,
options=options)
# Evaluate the model
loss, coco_metrics = model.evaluate(validation_data, batch_size=4)
print(f"Validation loss: {loss}")
print(f"Validation coco metrics: {coco_metrics}")
# Save the model
model.export_model(model_name=regular_output_model_name)
model.export_model(model_name=fp16_output_model_name, quantization_config=quantization_config)
Using %d with sscanf can cause a problem, because %d expects an int, which can be 2 bytes or 4 bytes.
In earlier days an int (and thus %d) was often 2 bytes, but in modern environments it is typically 4 bytes. To be certain you read 2 bytes, declare the variable as a short and replace %d with %hd or %hi (or %hu for unsigned short).
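A small illustration (the input string is arbitrary):

#include <stdio.h>

int main(void) {
    short value;  /* a 2-byte target on typical platforms */
    /* %hd tells sscanf the destination is a short, not an int */
    if (sscanf("12345", "%hd", &value) == 1)
        printf("%hd\n", value);
    return 0;
}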
Came back to this question years later to offer an update.
Laravel 12 has a new feature called Automatic Eager Loading, which fixed this issue of eager loading in recursive relationships for me.
https://laravel.com/docs/12.x/eloquent-relationships#automatic-eager-loading
The command to install the stable version of PyTorch (2.7.0) with CUDA 12.8 using pip on Linux is:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
Store tenant info somewhere dynamic. Instead of putting all your tenant info (like issuer and audience) in appsettings.json, store it in a database or some other place that can be updated while the app is running. This way, when a new tenant is added, you don't need to restart the app.
Figure out which tenant is making the request. When a request comes in, figure out which tenant it belongs to. You can do this by:
Checking a custom header (e.g., X-Tenant-Id)
Looking at the domain they’re using
Or even grabbing the tenant ID from a claim inside the JWT token
Validate the token dynamically. Use JwtBearerEvents to customize how tokens are validated. This lets you check the tenant info on the fly for each request; see the sketch after this list. Here's how it works:
When a request comes in, grab the tenant ID
Look up the tenant’s settings (issuer, audience, etc.) from your database or wherever you’re storing it
Validate the token using those settings
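A rough sketch of that flow in ASP.NET Core (the tenant lookup is a hypothetical placeholder for your own store):

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Events = new JwtBearerEvents
        {
            OnMessageReceived = context =>
            {
                // e.g. read the tenant from a custom header and stash it for later use
                context.HttpContext.Items["TenantId"] =
                    context.Request.Headers["X-Tenant-Id"].ToString();
                return Task.CompletedTask;
            },
        };
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateAudience = true,
            ValidateIssuer = true,
            // Resolve the issuer per request instead of from appsettings.json
            IssuerValidator = (issuer, token, parameters) =>
            {
                // TenantStore.FindByIssuer is hypothetical: look the tenant up in your DB
                var tenant = TenantStore.FindByIssuer(issuer);
                if (tenant is null)
                    throw new SecurityTokenInvalidIssuerException($"Unknown issuer: {issuer}");
                return issuer;
            },
        };
    });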
This could be helpful: https://github.com/mikhailpetrusheuski/multi-tenant-keycloak and this blog post: https://medium.com/@mikhail.petrusheuski/multi-tenant-net-applications-with-keycloak-realms-my-hands-on-approach-e58e7e28e6a3
Shoutout to Mikhail Petrusheuski for the source code and detailed explanation!
Not sure if anyone is still monitoring this thread, but better late than never. We have launched a new unified GitOps controller for ECS (EC2 and Fargate) and Lambda. EKS is also coming soon. Check it out; we would love to engage on this: https://gitmoxi.io
I had the same problem and resolved it by adding .python between tensorflow and keras. So instead of tensorflow.keras, I wrote: tensorflow.python.keras
Adding GeneratedPluginRegistrant.registerWith(flutterEngine) to MainActivity.kt worked for me.
import io.flutter.plugins.GeneratedPluginRegistrant
//...
override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
super.configureFlutterEngine(flutterEngine)
GeneratedPluginRegistrant.registerWith(flutterEngine);
configureChannels(flutterEngine)
}
Source:
https://github.com/firebase/flutterfire/issues/9113#issuecomment-1188429009
I ignored this in my case, since using async/await caused a flash or delay before the page loads.
If you change the mocking method and then you cast, you could avoid the ignore comment:
jest.spyOn(Auth, 'currentSession').mockReturnValue({
getIdToken: () => ({
getJwtToken: () => 'mock',
}),
} as unknown as Promise<CognitoUserSession>)
You're stuck in a loop because Google Maps APIs like Geocoding don't support INR (Indian Rupees) billing accounts.
Even if you're not in India, Google might still block the API if your billing account uses INR.
You need to manually create a new billing account using the Google Billing console, and specifically make sure:
The country is set to the U.S. (or another supported one)
The currency is set to USD
It's not created via the Maps API "Setup" flow, because that usually defaults to your local region/currency (e.g., INR)
Then create a new project and link this specific USD billing account manually. After linking the billing account, enable the Geocoding API within that project.
If the issue still persists, please share your setup in the billing account using the Google Billing console and configurations.
How do we get the UserID? I am trying to retrieve the user descriptor and pass it in the approval payload's request body; it returns an ID like aad.JGUENDN......., but when I try to construct the approvers payload it gives me an invalid identities error.
I had the same issue. I think [email protected] is not compatible with [email protected].
I ran npm install react-day-picker@latest and that fixed it.
Hope it helps.
In the project explorer, click the 3-dot settings button (︙), then go to Behaviour -> Always Select Opened File.
Hi @Alvin Jhao, I've implemented the monitor pipeline as advised, using a Bash-based approach instead of PowerShell. The pipeline does the following:
✅ Fetches the build timeline using the Azure DevOps REST API
✅ Identifies stages that are failed, partiallySucceeded, or succeededWithIssues
✅ Constructs a retry payload for each stage and sends a PATCH request to retry it
✅ Verifies correct stage-to-job relationships via timeline structure
Here’s where I’m stuck now:
Although my pipeline correctly:
Authenticates with $(System.AccessToken)
Targets the correct stageId and buildId
Sends the payload:
{
"state": "retry",
"forceRetryAllJobs": true,
"retryDependencies": true
}
I consistently receive: Retry failed for: <StageName> (HTTP 204)
Oddly, this used to work for stages like PublishArtifact in earlier runs, but no longer does — even with identical logic.
Service connection has Queue builds permission (confirmed in Project Settings)
Target builds are fully completed
Timeline output shows the stage identifier is present and correct
The payload matches Microsoft’s REST API spec
Even test pipelines with result: failed stages return 204
Are there specific reasons a stage becomes non-retryable (beyond completion)?
Could stage identifier fields being null (sometimes seen) block the retry?
Is there a way to programmatically verify retry eligibility before sending the PATCH?
Any help or insights would be appreciated!
Open Xcode, select the Pods target, then delete React-Core-React-Core_privacy and RCT-Folly-RCT-Folly_privacy, and try again — that should fix it.
I had the same issue on a gradle project and I was able to resolve it by following the instructions given in this link: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-project-gradle.html
I too faced the same issue; the problem is that the file is not saved. Try turning on autosave, or use CTRL + S.
Could you please share how you ended up resolving this? I am having the same problem 7 years later.
The default for WebSocket connections in NestJS is Socket.IO, so if you want to connect to your WebSocket server, your client must use Socket.IO as well. If you try to connect with a plain WebSocket, you will get this error:
Error: socket hang up
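A minimal client sketch (the port is whatever your Nest gateway listens on):

import { io } from "socket.io-client";

// Connect with the Socket.IO protocol, not a raw WebSocket
const socket = io("http://localhost:3000");

socket.on("connect", () => {
  console.log("connected:", socket.id);
});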
Delete the node_modules folder, upgrade Node to the latest version, and run npm install again.
Is it possible to create a field (control?) to search the logs from within the dashboard?
How does Ambari manage components such as Apache Hadoop, Hive, and Spark?
Ambari now uses Apache Bigtop: https://bigtop.apache.org/
Bigtop is similar to HDP.
Can Ambari directly manage existing Hadoop clusters?
How do I get Ambari to manage and monitor my open source cluster? I already have data on my current Hadoop cluster and don't want to rebuild a new cluster.
Ambari can do this, but it's not an easy process. It is much easier to deploy a Bigtop cluster from scratch using Ambari.
Using Ambari on top of an existing cluster requires creating Ambari Blueprints to post cluster info and configurations to Ambari Server. Some details here: https://www.adaltas.com/en/2018/01/17/ambari-how-to-blueprint/
In case you are using other functions like store, storePublicly, etc.
$cover->store($coverAsset->dir, 'public');
$cover->storePublicly($coverAsset->dir, 'public');
Look for my implementation of FlowHStack (fully back-ported to iOS 13): github.com/c-villain/Flow
A video demo is here: t.me/swiftui_dev/289
class Vision: Detector {
typealias ViewType = Text
func check(mark: some Mark) -> ViewType {
Text("Vision")
}
}
The above answer is correct. For more details, see this link: https://tailwindcss.com/docs/dark-mode
Chrome no longer shows this information directly, so you'll have to use a more complicated method, but one that requires no external tools:
- Enter this address in your URL bar:
chrome://net-export/
- Click on Start Logging to Disk, and choose a temporary location for the log file that will be generated
- In a separate Chrome window or tab, load the URL of a site that you want to be able to access using the proxy settings
https://stackoverflow.com/ for example
- Go back to the first window/tab and click on Stop Logging
- Click on Show File
- The export file is shown and selected. Open it with a text editor
- Search for the string "proxy_info":"PROXY inside this file
- The content of the line will show you the proxy parameters you need to use:
{"params":{"proxy_info":"PROXY proxy1.xxxx.xxx:8080;PROXY proxy2.xxxx.xxx:8080"},"phase":0,"source":{"id":2006431,"start_time":"1018634551","type":30},"time":"1018634572","type":32},
- In this example, there are two proxies available: one with name proxy1.xxxx.xxx and port 8080, and the other with name proxy2.xxxx.xxx and also port 8080.
You need to use the v5 version; it will be supported then.
I have thoughts on this that exceeded the character limit for SO, so I posted on Dev.to (a completely independent blog I write myself; totally unaffiliated).
GOTO is, IMHO, a very slept-on keyword, and I use it heavily when doing row-by-agonizing-row (RBAR) operations in T-SQL. Stack Overflow might not be the place for long-form answers, but given how nuanced SQL dialects are, where T-SQL lives in that spectrum, and what to do about it, I had to decide between being concise and being complete. I went with complete, due largely to the fact that the OP and others coming here might want something thoughtful.
Why T-SQL's Most Misunderstood Keyword is Actually It's Safest ~ David Midlo (me/Hunnicuttt)
As I found out, brace expansion happens before variable expansion; you can accomplish the same objective by replacing for i in {${octet[0]:-0}..255} with for ((i=${octet[0]:-0}; i<=255; i++)) (and do the same for j, k and l) – @markp-fuso
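A quick demonstration of that ordering difference:

a=3
for i in {$a..5}; do echo "$i"; done        # prints the literal "{3..5}" - brace expansion ran first
for ((i=a; i<=5; i++)); do echo "$i"; done  # prints 3 4 5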