I'm one of the Indago developers. Glad to hear you are using it.
Currently Indago only supports continuous optimization (continuous variables). However, your rounding workaround is legitimate and not far from the Discrete PSO method. That said, methods like GA are more suitable for discrete optimization, since discreteness is intrinsic to their fundamental idea.
We plan to implement some methods and support discrete or even mixed problems in the future. But not in the near future :)
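For reference, a minimal Python sketch of that rounding workaround, assuming Indago's PSO interface as shown in its docs; evaluate_discrete is a hypothetical stand-in for your own discrete cost function:

import numpy as np
from indago import PSO

def objective(x):
    x_int = np.round(x).astype(int)  # snap the continuous candidate to integers
    return evaluate_discrete(x_int)  # hypothetical: your discrete cost function

optimizer = PSO()
optimizer.dimensions = 5
optimizer.lb = np.zeros(5)       # lower bounds
optimizer.ub = np.full(5, 10.0)  # upper bounds
optimizer.evaluation_function = objective

result = optimizer.optimize()
print(np.round(result.X).astype(int), result.f)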
In Opera, having Block ads or Block Trackers enabled will use `isolated-first.jst`. The file appears to come from: https://gitlab.com/eyeo/anti-cv/snippets/-/blob/main/dist/isolated-first.jst
If you don't want it included, it should no longer be executed once you disable "Block ads" and "Block trackers". Hope this helps!
I know this is an old thread, but... PeopleSoft has had, and still has, an integration with DocuSign, for both a SOAP-based service and a REST-based service.
In memory of me
In memory of the lumps in my throat
In memory of you
In memory of the relentless pains
In memory of all the days that passed and went
In memory of the crushed youth of my past
In memory of the notebooks
In memory of the departures
In memory of the steps condemned to retreat
In memory of a mother whose sin is being a mother
In memory of a father who gave his life to provide a piece of bread
In memory of his eyes, as dry as a doormat
In memory of a father and the endless dinner spreads
In memory of empty hands and sorrows full of pain
In memory of the silent instrument, the acquaintance without a glance
In memory of selling one's breath in the market of hypocrisy
In memory of children kept red by black slaps
I was wondering the same thing and resolved it by closing all open tabs and starting over. It seems we'd done something to trigger the shortcut bar showing up. I couldn't find out what I'd done, but I read this in their documentation and figured it had something to do with what I had open:
Shortcut bars host shortcuts of views and editors and appear if at least one view or editor is minimized, otherwise, they are hidden.
https://dbeaver.com/docs/dbeaver/Application-Window-Overview/#shortcut-bar
docker build -t nginx:stable .
I suppose you should choose a different tag name for the resulting image; the one you chose conflicts with the base image you're using.
Try something like docker build --tag myown-nginx:stable .
TL;DR: I have no full solution, even now, 8 years after the question was asked. But you can find a browser plug-in that does this for you.
Traditional desktop web browsers do just what you requested: when changing the zoom level, they rewrap (reflow) the displayed text so it is readable without scrolling horizontally, which is the most accessible way to consume text.
However, over time this native feature got more and more forgotten:
In my opinion it's a shame there is still no satisfying answer to this (that I know of), because zooming combined with constant horizontal scrolling is a severe accessibility issue on many webpages and e-mails when consumed with limited sight on mobile devices.
Some web designers respond to this lack of accessibility by placing buttons to render their web page at several font sizes (as also explained in Bob's comment), but that option is often not available.
However, the good news is that for some mobile web browsers there are plugins that do what you need. Not being in the browser core, however, makes them less stable.
One example is the Firefox / Fennec plugin Text reflow (on mobile); it works well for me on most web pages that are not too strictly styled.
You might need a different plugin for your web browser of choice (or any other app displaying HTML content, such as an e-mail app).
Any idea why this does not work for me? The code is added exactly as shown. I'm on Dawn 15.3.0!
🚀 CRM CLASSIC – The Ultimate FREE Business Management Solution!
✅ 100% FREE for Small & Medium Businesses
✅ Boost Efficiency, Simplify Operations & Increase Revenue
✅ Complete CRM/ERP Suite – Sales, Invoicing, Leads, Customer Management, Accounting, HR, and More!
✅ Trusted by 7,000+ Companies in 30 Countries
Take control of your business with powerful tools—all at no cost! Join now and experience seamless management with CRM CLASSIC.
📢 Sign up today—it’s FREE!
Visit our website https://www.crmclassic.com/
I have the same issue, and no matter what I use, Print is not working, with the same error every time. Is there another way to get text on the output besides Print?
Run the following commands in your terminal:
npm i [email protected]
npx drizzle-kit push
npx drizzle-kit studio
Vidstack provides the CSS hack (height: 1000%) to avoid the YouTube branding, at the cost of excessive height for portrait (9/16) videos if the player itself has a 16/9 layout.
For me adding 2 lines of CSS does the job:
iframe.vds-youtube[data-no-controls] {
height: 1000%;
}
By adding a matching width to this, the video is shown as expected, and a margin fix places the video in the center of the player:
iframe.vds-youtube[data-no-controls] {
height: 1000%;
width: calc(100% / 16 * 9 * 9 / 16);
margin: 0 auto;
}
This does the job. But I still need to apply it manually, which can be a real pain.
Does anyone know a solution to automatically detect if we handle a portrait or landscape video from YouTube?
I have a different issue with the same code.
I use an FX5U; I seem to be able to connect to the PLC, but the code always returns 0 as the result when reading any of the registers or inputs. Any ideas what this could be driven by?
Thanks!
I managed to solve my problem.
I had to add the following line to the uWSGI configuration file:
virtualenv = "/var/www/folder/folder_env";
I don't know how else I could have run my Flask website, as I installed Flask only within the virtualenv, but adding the line above solved my problem.
Activate the offending environment, then:
conda update --all
worked for me after updating to macOS 15.4.1
Next.js also has a built-in capability for PWAs, including push notifications: https://nextjs.org/docs/app/building-your-application/configuring/progressive-web-apps
Since you already have a method to find the largest rectangle, I want to build on top of that solution. I will, however, start my suggestion by working out the largest inner square, for simplicity. Extending the approach to rectangles comes with a caveat that may or may not be relevant to your data.
Let's say I have a perfect square rotated by an unknown angle, with length L, and I run my own (axis-aligned) square finder. This initial solution L0, based on no initial angle (t0=0), already provides a lot of information about all other possible solutions. Namely, if my perfect square is rotated by 45 degrees, then I will have found a square with length L0 = cos(45°) · L. Conversely, if the input square was not rotated, then any other angle t1 I try in the next iteration (0 to 45 degrees, because of symmetry) will return L1 = L/cos(t1).
The lower bound is less important, but may be used to speed up future searches. The upper bound is what matters most, as it tells us where any higher maximum may be. Compared to the function call that locates the largest square, it is hardly any work to pick the next best angle to continue our search. By examining the highest possible solution, we will again exclude many possible solutions.
This strategy is apparently similar to Shubert-Piyavskii optimization (it came up when brainstorming with Gemini). Just using t0=0 and t1=45 already guarantees the solution is at most 10% larger. In cases where the data is presumed to resemble a square, this will be a very effective strategy. A perfect circle would be the worst case, since infinitely many angles would have to be checked.
Now, getting back to your problem of rectangles, a similar strategy can be used. After finding a rectangle, you can compute lower and upper bounds for other angles. Working out the largest rotated rectangle within another is not too difficult. How much it tells you about all other solutions depends on the ratio between height and width.
It is a bit harder to predict the convergence here. If the actual rectangle is really elongated, then your initial guess will likely have its longest diagonal lying parallel to the shortest sides. In the next attempt you will likely find it. If the rectangle has sides of similar length, then the behaviour will be similar to the square example described above.
The methods above can be disrupted by annoying data thrown at them. The reasoning behind them is that, once you get close to your solution, they are likely inside the actual solution. If your input resembles a snowflake, then good luck. This is just an argument for a maximum number of iterations and possibly a tolerance value; see the sketch after this paragraph. Perhaps it is also a good idea to include the largest inscribed circle for some edge cases.
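To make the iteration concrete, here is a rough Python sketch of the angle search, assuming the cosine upper bound described above and a user-supplied largest_square(angle) finder; treat it as a heuristic outline, not a definitive implementation:

import math

def search_best_angle(largest_square, lo=0.0, hi=45.0, tol=0.5, max_iter=20):
    # largest_square(angle_deg): your axis-aligned finder, run on the data
    # rotated by -angle_deg; returns the side length it found.
    evaluated = {t: largest_square(t) for t in (lo, hi)}
    best_t, best_s = max(evaluated.items(), key=lambda kv: kv[1])

    for _ in range(max_iter):
        # Upper bound at angle t from each sample (ti, si): s(t) <= si / cos(t - ti)
        def upper_bound(t):
            return min(s / math.cos(math.radians(abs(t - ti)))
                       for ti, s in evaluated.items())

        # Probe the midpoint of the widest unexplored gap between sampled angles
        ts = sorted(evaluated)
        width, t_next = max((b - a, (a + b) / 2) for a, b in zip(ts, ts[1:]))
        if width < tol or upper_bound(t_next) <= best_s:
            break  # heuristic stop: the most promising probe cannot beat the incumbent
        evaluated[t_next] = largest_square(t_next)
        if evaluated[t_next] > best_s:
            best_t, best_s = t_next, evaluated[t_next]
    return best_t, best_s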
One very late thought, in case others pick up this thread: if you look at what is probably one of the better in-production/non-trivial frameworks, there is JBoss Drools + Governor + jBPM + whatever their optimization framework is called + their complex event processing framework. IBM had a really strong competitor with its commercial system, but then, um, bought JBoss (via Red Hat). So long as they remain a "good corporate citizen" (which is likely, as the open-source approach has been rather profitable for Microsoft, Amazon, Google, maybe even Oracle?), you could just use their APIs. You likely won't be doing a lot of custom Java programming for rules, but if you learn the Drools language it isn't so bad, and their BPM tooling, when I last played with it, was pretty decent (aesthetics of Eclipse on a KDE desktop notwithstanding...).
The biggest barrier is that you really do need someone who knows their way around the JBoss server infrastructure. They have some very good software, but it isn't as if you are going to master it over a weekend.
These are all essential parts of a production system: a rules engine with rule management (so you don't end up with conflicting rules), probably with some sort of master data management so the rules and the rest of the framework all understand the data the same way. It's not much help if different departments insist on their own narrowly focused view of the data, so that rules run fine on tasks completed by some, but not all.
You need a way to turn task assignments into changes in the state machine of tasks and/or the value of data objects, so you can trigger whatever happens next.
Because the real world never does what it is supposed to, having a complex event processor lets you define time-based, value-based, and event-based rules that work with stateful data. In a lot of enterprises, this translates into integration with email, schedules/calendars, and at a minimum the sort of tasks/todos that would be supported by a reasonably robust to-do list management service or end-user applications (including Outlook, KDE Akonadi, GNOME's Evolution; server-side options are supported by a variety of open-source groupware solutions).
If your tasks are all done by robots or vending machines, then you can skip the groupware!
It also requires a fairly sophisticated optimization engine (but the algorithms are well-defined and there are plenty of open-source implementations to interface with). As things get completed (and never in the order the workflow designer intended), and conflicting priorities constantly emerge, you want to be able to redo an optimized schedule, possibly in real time. Imagine you are using this to run a large, busy restaurant: food gets prepared as close as possible to synchronized, but patrons leave before their food arrives, some VIP shows up, and suddenly the cook accidentally drops a block of ice in the deep fryer. You need to scramble to reorder when things are ready to be picked up from the kitchen and bar and delivered to the tables, as diners are given the option to substitute something for their french fries or put up with a long delay while the paramedics take the fry cook to the hospital and the deep fryer gets cleaned out and heated back up. Hey, if things worked according to plan we would not need these sorts of systems!
Was able to fix my app by adding the following to settings.gradle:
include ':app'
includeBuild('../node_modules/@react-native/gradle-plugin')
{/* Error display */}
{error && (
<div className="error-message">
<p>{error}</p>
{error.includes('index') && (
<>
<p>This query requires a Firestore index.</p>
<a
href={`https://console.firebase.google.com/v1/r/project/${YOUR_PROJECT_ID}/firestore/indexes`}
target="_blank"
rel="noopener noreferrer"
className="index-link"
>
Click here to create the required index
</a>
</>
)}
</div>
)}
I forced a login using a new user by passing the parameter `-ContextScope Process`, and I got a new MFA prompt; then I ran the script and it worked. Not sure what was going on; was generating a new MFA prompt the reason, or was it just about time? Either way, it is now working.
In response to the last answer (not able to comment yet): later versions of Android do not allow a full-screen-size image; the image will appear in the middle of the screen with a background colour of your choosing. You can use a PNG if you like, to make the image look nicer with colour and detail, but unfortunately you cannot have an image occupy the entire page. I also find it's really buggy/hit-and-miss getting the compiler to recognise these changes.
Go for a company logo centre-screen with your company's branding colour as the background.
Solved the issue with the CTkOptionMenu click behavior
Hello,
I’ve solved the issue mentioned in this thread about the CTkOptionMenu not hiding upon a second click. I’ve created a fork of the project where this issue has been fixed. You can check the updated version in my repository here:
Feel free to take a look or integrate the changes if they are useful!
Best regards, WipoDev
Just had a need for this myself:
{
"extend": "selectAll",
"text": "Select All",
"action": function (e, dt, node, config) {
e.preventDefault();
dt.rows().deselect();
dt.rows({ search: 'applied' }).select();
}
}
You can test each of the attributes in a flake's outputs' formatter attribute set by using nix run:
nix --extra-experimental-features 'nix-command flakes' run '.#formatter.i686-linux' flake.nix
nix --extra-experimental-features 'nix-command flakes' run '.#formatter.x86_64-linux' flake.nix
Do not delete your .metadata folder, because doing so will delete everything. If you have a lot of configuration and work done, it will be a nightmare to recover it.
The issue is that your RequestEEFormData function uses $.ajax, but it does not return a promise that resolves after container.innerHTML is updated. As a result, loadScript runs immediately after calling RequestEEFormData, before the DOM is actually updated. To fix this, you should wrap your AJAX request inside a promise and resolve it after setting container.innerHTML. That way, you can guarantee that the new HTML is rendered before loading and running your script. Right now, your then() is not really waiting for the DOM update, which is why querySelector returns null at first.
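A minimal sketch of that promise wrapper (RequestEEFormData and loadScript as named in your code; the container parameter and the URLs are placeholders):

function RequestEEFormData(url, container) {
  // Resolve only after the new HTML is actually in the DOM.
  return new Promise((resolve, reject) => {
    $.ajax({
      url: url,
      success: function (html) {
        container.innerHTML = html;
        resolve(); // anything chained with .then() now sees the updated DOM
      },
      error: reject
    });
  });
}

// Usage: the script is loaded only after the markup exists.
RequestEEFormData('/ee-form', document.getElementById('form-host'))
  .then(() => loadScript('/ee-form.js'));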
This is exactly why we are building Defang. With Defang, you can go directly from Docker Compose to a secure and scalable deployment on your favorite cloud (GCP / AWS / DigitalOcean) in a single "defang compose up" command. Networking, compute, storage, LLMs, GPUs: all supported. Take a look and give us your feedback: https://defang.io/
I finally found a solution that worked for me!
just follow these steps:
List the process using port 8081:
lsof -i :8081
Kill the process using its PID:
kill -9 (PID)
Example:
kill -9 19453
For me, the simplest method is to use the slice() function from the JavaScript String class.
const str = 'This is a veeeeeery looooong striiiiing';
console.log(str.slice(0, 7) + "\u2026"); // Output: "This is…"
You need to install wasm-tools; just run:
dotnet workload install wasm-tools
This should fix it.
I am using curl with the -n switch. I have validated the login information in my .netrc file (been using .netrc forever).
I always get "* Basic authentication problem, ignoring."
What gives??
Selenium isn't able to detect or interact with system-level popups like the "Save As" dialog, because it only operates within the browser's DOM. These popups are controlled by the operating system, not the browser, which puts them outside Selenium's reach. To work around this, people often turn to external tools like AutoIt, WinAppDriver, or PyAutoGUI. A better solution, though, is to configure the browser to download files automatically, adjusting its settings to skip the prompt altogether. Another reliable and often more efficient option is to download the file directly from its URL with HttpClient in C#.
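For the auto-download route, a minimal C# sketch using Chrome preferences (the download path is a placeholder; EdgeOptions accepts the same preferences, as both derive from ChromiumOptions):

using OpenQA.Selenium.Chrome;

var options = new ChromeOptions();
options.AddUserProfilePreference("download.default_directory", @"C:\Downloads"); // placeholder path
options.AddUserProfilePreference("download.prompt_for_download", false);         // skip the Save As dialog
using var driver = new ChromeDriver(options);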
When you open QuickWatch and it is off the visible screen, hit Alt+Space, then 'M', then any arrow key. After that, if you move your cursor, the QuickWatch window will follow it, and you can drag it back from whatever unviewable dimension it was in before. This sequence works for any window that has focus. You can practice on a visible window before you're forced to use it on an invisible one.
Where was this entered to have it sort properly?
Inspired by @Aia Ashraf's answer:
import 'dart:ui' as ui;
import 'package:flutter_svg/flutter_svg.dart'; // provides vg and SvgAssetLoader

Future<ui.Image> svgToImage(String assetPath, {int width = 100, int height = 100}) async {
  final pictureInfo = await vg.loadPicture(SvgAssetLoader(assetPath), null);
  final image = await pictureInfo.picture.toImage(width, height); // use the parameters instead of a hardcoded 50x50
  return image;
}
Then apply it to your paint like this:
final off = Offset(width, height);
canvas.drawImage(image!, off, solidFill);
Below worked for me
auth_manager=airflow.providers.fab.auth_manager.fab_auth_manager.FabAuthManager
It seems your divider has no width; you can add it to the Container:
SizedBox(
height: 10,
child: Center(
child: Container(
margin: EdgeInsetsDirectional.only(start: 1.0, end: 1.0),
height: 5.0,
width: double.infinity,
color: Colors.red,
),
),
),
Using the google.accounts.id.prompt callback handler to take action is one good option. You may decide to conditionally display a dialog if a credential is returned, or enter a different auth flow if not. isDisplayed is not supported by FedCM, so information on whether the prompt is shown to the user is not available to web apps. You'll want to remove any code that currently depends upon isDisplayed state.
You cannot do this. The answer above is incorrect, at least as of Quartz version 3.8.0.0. Putting two Quartz expressions in one simply ignores the entire additional second line.
To completely decouple the YAML file from your flow, you can create a separate flow that is triggered by a scheduler, e.g. every hour, to refresh the file. Inside the scheduled flow, you write the content of the YAML file into a non-persistent Object Store.
Whenever you need to access the file from any flow, you can simply load it from the Object Store by its defined key.
There is now a BigDecimal package: https://pub.dev/documentation/big_decimal/latest/big_decimal/BigDecimal-class.html
Snowflake lacks integrated reporting capabilities because it is a data warehouse. To generate reports, you'll need to use a business intelligence tool like Excel, Tableau, or Power BI.
If you're running into the "SessionNotCreatedException: user data directory is already in use" error with Selenium Edge, try setting up a fresh, temporary user data directory for each session or just leave out the --user-data-dir argument altogether. Running Edge in headless or incognito mode can also help prevent profile conflicts. Make sure you always close the driver properly with driver.quit() after saving your table. If you're using Great-Tables, it's a good idea to pass in a custom Selenium driver that's set up this way. And don’t forget to double check that your Edge, EdgeDriver, and Selenium versions all match up.
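A minimal Python sketch of the fresh-profile approach (the URL is a placeholder; if Great-Tables drives Selenium for you, check its save() options for passing a driver configured like this):

import tempfile
from selenium import webdriver
from selenium.webdriver.edge.options import Options

profile_dir = tempfile.mkdtemp(prefix="edge-profile-")  # fresh directory per session

options = Options()
options.add_argument(f"--user-data-dir={profile_dir}")
options.add_argument("--headless=new")  # optional; also reduces profile conflicts

driver = webdriver.Edge(options=options)
try:
    driver.get("https://example.com")  # placeholder
finally:
    driver.quit()  # always release the profile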
You can install https://github.com/davidodenwald/prettier-plugin-jinja-template, which follows the same templating rules as django-html in most cases.
I am getting the same error. I have attached the GitHub repo link here; can anyone help me out with this?
Thank you so much
I was pulling my hair out, and OpenAI, Gemini, Grok, and Claude all failed to give this resolution.
They were all pretty good, but leaned towards it being a bug with Apple.
A standard PayPal Subscriptions integration with the JS SDK will open a modal/mini window for payment. You can generate a button at https://www.paypal.com/billing/plans to see the experience.
According to this Google Issue Tracker post, comment #3:
"These issues are all related to underlying data on Google Finance and are not specific to Sheets or the =GOOGLEFINANCE() formula."
This means it's due to a data issue on Google's side. The missing data for some stock tickers in Google Sheets, such as HKG:2800, isn't caused by an error in your formula or in Google Sheets itself. The problem is that Google Finance, which provides stock data to Sheets, doesn't have reliable or up-to-date information for those specific tickers. Even if your formula is correct and works for other stocks, if Google Finance lacks valid or current data for a particular ticker, the formula cannot return any results.
Another post contains a suggestion that addresses this issue: "Please use the Send Feedback button in Google Finance to report data issues. Please verify your ticker symbol. If Google Finance has the correct data, but the =GOOGLEFINANCE() formula is failing, please send feedback directly within the Google Sheets application menu: Help > Report a problem."
Yes, one unit operates on one warp in lockstep, but warps can be swapped with a context switch. Obviously, there are usually far more warps than warp schedulers, so they will also proceed sequentially. In theory, a GPU could place threads into warps depending on which branch they take (I don't know of one, but it seems like a viable option, because constant branches are eliminated and usually have no effect with modern compilers and GPUs).
I cannot imagine a device handling different branches this way. It may be possible, but as long as you have a smaller number of schedulers and a bigger number of warps to process, it makes only half sense, because you switch the whole warp and you still need to finish the longest branch. Sure, it would reduce latency, because you could execute both branches concurrently, but it still wouldn't eliminate the other problem.
As long as you are doing the same operations in the same order in different branches, and just use different data, it should be okay and perform the same instructions for all threads without stalls or computing both variants.
The last thing I'm generally curious about is whether the GPU architecture can allow thread swaps within warps; then there would be even better possibilities for branches and so on. Also, don't take my statements as complete truth; I don't know that much either. It may be better to look at AMD, as they have a more open (to look at) architecture.
Why not define not() yourself?
not <- function(x) !x
TRUE |> not()
Your best bet is to use Twscrape with several Twitter accounts and rotate the auth_tokens. This spreads out the load and helps you get around the 600-tweet-per-day limit tied to each token. Browser automation tools like Playwright or Selenium are also solid options since they mimic real user behavior, though they tend to be slower and a bit more complex to set up. Using proxies is a smart move too, because it helps mask your IP and lowers the chances of getting flagged. And if you're not into coding, Apify's Twitter Scraper is a great plug-and-play option that handles pagination for you.
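A minimal twscrape sketch of the account-rotation approach (credentials are placeholders; API names per the twscrape README):

import asyncio
from twscrape import API, gather

async def main():
    api = API()  # the account pool is persisted in accounts.db by default
    # Add several accounts; twscrape rotates their auth tokens automatically.
    await api.pool.add_account("user1", "pass1", "mail1@example.com", "mail_pass1")
    await api.pool.add_account("user2", "pass2", "mail2@example.com", "mail_pass2")
    await api.pool.login_all()

    tweets = await gather(api.search("some query", limit=500))
    print(len(tweets))

asyncio.run(main())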
It is unclear as to why or how, but simply starting and stopping the Directory Services Authentication Scripts found here permanently resolved the issue. The issue will still manifest after the start-auth script runs; it's only remediated after the stop-auth script is executed. Now that the issue is resolved, I have no way to test or determine which specific command in the script is the key.
Sure:
await page.locator('input field locator').pressSequentially('text to enter', { delay: 100 }) //the delay is optional, can be adjusted to be slower or faster
Select the string key in the base language table and press the delete key on your keyboard.
Looks like removing the user and adding back to the DB solved the issue for us.
I think I got it to work by changing the cancelTouch function:
const cancelTouch = (e: TouchEvent) => e.cancelable && e.preventDefault();
I have just created a project fixing their code. I will publish it to Maven soon.
No compose is needed.
https://github.com/ronenfe/material-color-utilities-main
I am working with CATS and am having the same issue. Does the same solution apply, and if so, which file would I need to edit?
For me resyncing the project with Gradle files helped. I didn't modify anything, because the code worked before I shut down the system. I am using the latest Android Studio (Meerkat | 2024.3.1 Patch 2), but it is still an issue.
To configure WebClient to use a specific DNS server or rely on the system's resolver, you'll need to customize the underlying HttpClient's resolver. If you want to stick with the system DNS, use DefaultAddressResolverGroup.INSTANCE, which follows your OS-level settings. To set a custom DNS server (like 8.8.8.8), create a DnsAddressResolverGroup using a DnsNameResolverBuilder and a SingletonDnsServerAddressStreamProvider.
If you're working with an HTTP proxy, make sure it's properly set up in HttpClient and supports HTTPS tunneling through the CONNECT method. In more locked-down environments, it's a good idea to combine proxy settings with the system resolver and enable wiretap logging for better reliability and easier debugging.
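For the system-resolver case, a minimal sketch assuming Spring WebFlux with Reactor Netty (the custom-DNS variant via DnsAddressResolverGroup is only noted in the comment, since its builder wiring varies by Netty version):

import io.netty.resolver.DefaultAddressResolverGroup;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

HttpClient httpClient = HttpClient.create()
        .resolver(DefaultAddressResolverGroup.INSTANCE); // use the OS-level resolver
// For a specific server such as 8.8.8.8, pass a DnsAddressResolverGroup built from a
// DnsNameResolverBuilder with a SingletonDnsServerAddressStreamProvider instead.

WebClient webClient = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .build();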
I had some annotations with normalized values above 1, like "bottomX": 1.00099
In my case (Apache 2.4 win64, PHP 8.1.1 win32 vs16 x64), the problem was solved by the following: copy libsasl.dll from the PHP folder to apache/bin.
You cannot prevent the way that the Android system works. You need to handle your own session and state per the Designing and Developing Plugins guide.
I decided to just use Umbraco Cloud to host, and I've recreated the site there. The most likely issue was that my views referenced content IDs that existed only in my local database. I noticed and resolved this in Umbraco Cloud, which was cheaper for hosting anyway.
Sounds like a bug described in this [github issue](https://github.com/Azure/azure-cli/issues/17179) where `download-batch` doesn't distinguish between blobs and folder entries. It lists everything in the container and then attempts to incorrectly download "config-0000" as a file, and it writes a file with this name to your destination dir. Then it does a similar thing with "config-0000/scripts", but "config-0000" is a file, and that's where the "Directory is expected" error message comes from.
A possible workaround that might have worked for you is to specify a pattern that wouldn't match any of your folders in blob storage, like: `--pattern *.json`.
So, with the hint about the functions from @Davide_sd, I made a generic method that lets me fairly easily control how the sub-steps are split up. Basically, I'm manually deriving the functions I split off, but, much like cse, I keep the results in a dictionary to share among all occurrences.
The base expressions that make up the calculation are input and never modified; the derivation list is seeded with what you want to derive (multiple expressions are fine), and it will recursively derive them, using the expression list as required.
At the end, I can still use cse to a) bring it into that format should you require it, and b) factor out even more common occurrences.
It works decently well with my small example; I may update it as I add more complexity to the function I need derived.
from sympy import *
def find_derivatives(expression):
    derivatives = []
    if isinstance(expression, Derivative):
        #print(expression)
        derivatives.append(expression)
    elif isinstance(expression, Basic):
        for a in expression.args:
            derivatives += find_derivatives(a)
    elif isinstance(expression, MatrixBase):
        # recurse into the matrix entries (the original referenced undefined rows/cols/self)
        for i in range(expression.rows):
            for j in range(expression.cols):
                derivatives += find_derivatives(expression[i, j])
    return derivatives
def derive_recursively(expression_list, derive_done, derive_todo):
newly_derived = {}
for s, e in derive_todo.items():
print("Handling derivatives in " + str(e))
derivatives = find_derivatives(e)
for d in derivatives:
if d in newly_derived:
#print("Found derivative " + str(d) + " in done list, already handled!")
continue
if d in derive_todo:
#print("Found derivative " + str(d) + " in todo list, already handling!")
continue
if d in expression_list:
#print("Found derivative " + str(d) + " in past list, already handled!")
continue
if d.expr in expression_list:
expression = expression_list[d.expr]
print(" Deriving " + str(d.expr) + " w.r.t. " + str(d.variables))
print(" Expression: " + str(expression))
derivative = Derivative(expression, *d.variable_count).doit().simplify()
print(" Derivative: " + str(derivative))
if derivative == 0:
e = e.subs(d, 0)
derive_todo[s] = e
print(" Replacing main expression with: " + str(e))
continue
newly_derived[d] = derivative
continue
print("Did NOT find base expression " + str(d.expr) + " in provided expression list!")
derive_done |= derive_todo
if len(newly_derived) == 0:
return derive_done
return derive_recursively(expression_list, derive_done, newly_derived)
incRot_c = symbols('aX aY aZ')
incRot_s = Matrix(3,1,incRot_c)
theta_s = Function("theta")(*incRot_c)
theta_e = sqrt((incRot_s.T @ incRot_s)[0,0])
incQuat_c = [ Function(f"i{i}")(*incRot_c) for i in "WXYZ" ]
incQuat_s = Quaternion(*incQuat_c)
incQuat_e = Quaternion.from_axis_angle(incRot_s/theta_s, theta_s*2)
baseQuat_c = symbols('qX qY qZ qW')
baseQuat_s = Quaternion(*baseQuat_c)
poseQuat_c = [ Function(f"p{i}")(*incRot_c, *baseQuat_c) for i in "WXYZ" ]
poseQuat_s = Quaternion(*poseQuat_c)
# Could also do it like this and in expressions just refer poseQuat_s to poseQuat_e, but output is less readable
#poseQuat_s = Function(f"pq")(*incRot_c, *baseQuat_c)
poseQuat_e = incQuat_s * baseQuat_s
expressions = { theta_s: theta_e } | \
{ incQuat_c[i]: incQuat_e.to_Matrix()[i] for i in range(4) } | \
{ poseQuat_c[i]: poseQuat_e.to_Matrix()[i] for i in range(4) }
derivatives = derive_recursively(expressions, {}, { symbols('res'): diff(poseQuat_s, incRot_c[0]) })
print(derivatives)
elements = cse(list(expressions.values()) + list(derivatives.values()))
pprint(elements)
Try this!
RawRequest: "*\+*" or RawRequest:*\+*
The speedup is insignificant because you only sped up an insignificant part of the overall work. Most time is spent by the primes[...] = False commands, and they're the same for both wheels.
Official Microsoft documentation:
https://learn.microsoft.com/en-us/nuget/reference/nuget-exe-cli-reference?tabs=windows
I have this code, but it won't work for me; it just gets stuck:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
class Program
{
static void Main()
{
int[] LICEO = Enumerable.Range(4, 15).ToArray();
List<int> IUSH = Enumerable.Range(18, 43).ToList();
Console.WriteLine("Ingrese las edades de RONDALLA separadas por punto y coma (;):");
ArrayList RONDALLA = new ArrayList(Console.ReadLine().Split(';').Select(double.Parse).ToArray());
Console.WriteLine("LICEO : " + string.Join(", ", LICEO));
Console.WriteLine("IUSH : " + string.Join(", ", IUSH));
Console.WriteLine("RONALLA : " + string.Join(", ", RONDALLA.ToArray()));
int diferencia = (int)RONDALLA.Cast<double>().Max() - LICEO.Min(); // the ArrayList holds boxed doubles, so cast to double (not int) before Max()
Console.WriteLine($"Diferencia entre la edad mayor de RONDALLA y la menor del LICEO es: {diferencia}");
int sumaIUSH = IUSH.Sum();
double promedioIush = IUSH.Average();
Console.WriteLine($"La sumatira de las edades de IUSH es: {sumaIUSH}");
Console.WriteLine($"El promedio de las edades de IUSH es: {promedioIush}");
Console.WriteLine("Ingrese la edad que sea buscar del LICEO:");
int edadBuscadaLICEO = int.Parse(Console.ReadLine());
int posicionLICEO = Array.IndexOf(LICEO, edadBuscadaLICEO);
if (posicionLICEO != -1)
{
Console.WriteLine($"La edad {edadBuscadaLICEO} existe en la posición {posicionLICEO}.");
Console.WriteLine($"La edad en IUSH en la misma posición: {(posicionLICEO < IUSH.Count ? IUSH[posicionLICEO].ToString() : "N/A")}");
Console.WriteLine($"La edad en RONDALLA en la misma posición: {(posicionLICEO < RONDALLA.Count ? RONDALLA[posicionLICEO].ToString() : "N/A")}");
}
else
{
Console.WriteLine($"La edad {edadBuscadaLICEO} no existe en el LICEO.");
}
List<int> SALAZAR = LICEO.Concat(IUSH).ToList();
Console.WriteLine("Edades de SALAZAR: " + string.Join(", ", SALAZAR));
SALAZAR.Sort();
SALAZAR.Reverse();
Console.WriteLine("5 edades más altas de SALAZAR: " + string.Join(", ", SALAZAR.Take(5)));
Console.WriteLine("5 edades más bajas de SALAZAR: " + string.Join(", ", SALAZAR.OrderBy(x => x).Take(5)));
int[] edadesEntre15y25 = SALAZAR.Where(edad => edad >= 15 && edad <= 25).ToArray();
int cantidad = edadesEntre15y25.Length;
double porcentaje = (double)cantidad / SALAZAR.Count * 100;
Console.WriteLine($"Cantidad de edades entre 15 y 25 años: {cantidad}");
Console.WriteLine($"Porcentaje de edades entre 15 y 25 años: {porcentaje:F2}%");
}
}
Well, install works:
winget install --id Microsoft.Powershell
But the MS documentation says my original command should have worked. Frustrating.
Azure Database - I'm including here SQL Database, SQL Elastic Pool and MySQL Flexible Server - scaling cannot be performed in real time, because it involves downtime. That downtime can range from a few seconds to a few hours, depending on the size of your workload (Microsoft expresses it in terms of "minutes per GB" in some of their articles).
See this post from 2017, where they describe downtimes of up to 6 hours with ~250GB databases:
How do you automatically scale-up and down a single Azure database?
You probably know where I'm trying to get here. You automatically scale-up and down on your own. You need to either build your own tools or do it manually. There is no built-in support for this (and with reason).
I have to say that lately, for Azure SQL Pools, we are seeing extremely fast tier scaling (i.e. < 1 min) with databases in the range of 100-200GB, so the Azure team has probably come a long way in improving tier changes since 2017...
For MySQL Flexible Server I've seen it's almost never less than 4-5 minutes, even for small servers. But this is a very new service; I am sure it will get better with time.
The downtime is probably why Azure did not add out-of-the-box autoscaling, instead providing users metrics and APIs so they can choose when and how to scale according to their business needs and applications. Again, depending on your business case and workload, those downtimes might be tolerable if properly handled (at specific times of the day, etc.).
I.e. for our development and staging environments we are using this (disclaimer, I built it):
https://github.com/david-garcia-garcia/azureautoscalerapp
and have set up rules that cater to our staging environment needs: the pool scales automatically between 20 DTU and 800 DTU according to real usage. DTUs are scaled to a minimum of 50 between 6:00 and 18:00 to reduce disruption. Provisioned storage also scales up and down automatically (in the staging pools we get databases added and removed automatically all the time; some are small, others several hundred GB).
It does have a downtime, but it is so small that properly educating our QA team allowed us to cut our MSSQL costs by more than half.
- Resources:
myserver_pools:
ResourceId: "/subscriptions/xxx/resourceGroups/mygroup/providers/Microsoft.Sql/servers/myserver/pool/{.*}"
Frequency: 5m
ScalingConfigurations:
Baseline:
ScaleDownLockWindowMinutes: 50
ScaleUpAllowWindowMinutes: 50
Metrics:
dtu_consumption_percent:
Name: dtu_consumption_percent
Window: 00:05
storage_used:
Name: storage_used
Window: 00:05
TimeWindow:
Days: All
Months: All
StartTime: "00:00"
EndTime: "23:59"
TimeZone: Romance Standard Time
ScalingRules:
autoadjust:
ScalingStrategy: Autoadjust
Dimension: Dtu
ScaleUpCondition: "(data) => data.Metrics[\"dtu_consumption_percent\"].Values.Select(i => i.Average).Take(3).Average() > 85" # Average DTU > 85% for 3 minutes
ScaleDownCondition: "(data) => data.Metrics[\"dtu_consumption_percent\"].Values.Select(i => i.Average).Take(5).Average() < 60" # Average DTU < 60% for 5 minutes
ScaleUpTarget: "(data) => data.NextDimensionValue(1)" # You could also specify a DTU number manually, and the system will find the closest valid tier
ScaleDownTarget: "(data) => data.PreviousDimensionValue(1)" # You could also specify a DTU number manually, and the system will find the closest valid tier
ScaleUpCooldownSeconds: 180
ScaleDownCoolDownSeconds: 3600
DimensionValueMax: "800"
DimensionValueMin: "50"
TimeWindow:
Days: ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
Months: All
StartTime: "06:00"
EndTime: "17:00"
TimeZone: Romance Standard Time
ScalingRules:
# Warm up things for office hours
minimum_office_hours:
ScalingStrategy: Fixed
Dimension: Dtu
ScaleTarget: "(data) => (50).ToString()"
# Always have a 100Gb or 25% extra space, whatever is greater.
fixed:
ScalingStrategy: Fixed
Dimension: MaxDataBytes
ScaleTarget: "(data) => (Math.Max(data.Metrics[\"storage_used\"].Values.First().Average.Value + (100.1*1024*1024*1024), data.Metrics[\"storage_used\"].Values.First().Average.Value * 1.25)).ToString()"
| header 1 | header 2 |
|---|---|
| cell 1 | cell 2 |
| cell 3 | cell 4 |
In my case, it occurred only because the wrong .jks file was selected. After making sure the correct key was selected, the issue disappeared and the build was successful.
I am also working with a naturally Unicode language, and the easiest sure fix is as follows:
1-] Download a Unicode-supporting TrueType font. An example is OpenDyslexic-Regular.
Here is the GitHub repository for it.
2-] Either work in the directory where you downloaded the .ttf while in Python, or give the complete path.
3-]
pdf = FPDF()
pdf.add_page()
pdf.add_font("OpenDyslexic-Regular", "", "./OpenDyslexic-Regular.ttf", uni=True)
pdf.set_font("OpenDyslexic-Regular", "",8)
pdf.multi_cell(0, 10, txt="çiöüşp alekrjgnnselrjgnaej")
pdf.output("##.pdf" )
from fpdf import FPDF
from datetime import datetime
# Current date
today = datetime.today().strftime("%d/%m/%Y")
# Letter content
letter_content = f"""
প্রাপক:
মরহুম মোঃ আমিনুর ইসলাম-এর পরিবার/উত্তরাধিকারীগণ
গ্রাম: পাঁচগাছী, উপজেলা: কুড়িগ্রাম সদর, জেলা: কুড়িগ্রাম।
মোবাইল: 01712398384
বিষয়: ধার পরিশোধ সংক্রান্ত
মান্যবর,
আমি মোঃ আলআমিন সরকার, পিতা: মৃত- আবুল হোসেন, গ্রাম: পলাশবাড়ী, ডাকঘর: খলিলগঞ্জ, উপজেলা: কুড়িগ্রাম সদর, জেলা: কুড়িগ্রাম। ২০১৬ সাল থেকে মরহুম মোঃ আমিনুর ইসলাম এর সঙ্গে সুসম্পর্কে ছিলাম। আমাদের ব্যক্তিগত সম্পর্কের ভিত্তিতে, তিনি আমার নিকট মোট চার ধাপে ৫৮,০০০/- টাকা ধার গ্রহণ করেন। নিচে ধার নেওয়ার তারিখ ও পরিমাণ উল্লেখ করা হলো:
১। [তারিখ] - [টাকার পরিমাণ]
২। [তারিখ] - [টাকার পরিমাণ]
৩। [তারিখ] - [টাকার পরিমাণ]
৪। [তারিখ] - [টাকার পরিমাণ]
এই লেনদেনগুলো আমার ব্যক্তিগত নোটবুকে লিখিত রয়েছে এবং একটি বা একাধিক লেনদেনের সময় একজন সাক্ষী উপস্থিত ছিলেন।
মরহুমের হঠাৎ মৃত্যুতে আমি গভীরভাবে শোকাহত, কিন্তু এই আর্থিক বিষয়টি নিয়ে আমি বিপাকে পড়েছি।
আপনাদের কাছে বিনীত অনুরোধ, মরহুমের সম্পত্তির উত্তরাধিকারী হিসেবে এই দেনার বিষয়টি বিবেচনায় এনে তা পরিশোধের ব্যবস্থা গ্রহণ করবেন।
আপনাদের সদয় সহযোগিতা প্রত্যাশা করছি।
ইতি,
মোঃ আলআমিন সরকার
মোবাইল: 01740618771
তারিখ: {today}
"""
Ah, I feel your frustration: industrial cameras can definitely be tricky to get working with libraries like EmguCV, especially when they rely on special SDKs or drivers. Let's break it down and see how we can get things moving.
EmguCV (just like OpenCV, which it's based on) uses standard interfaces (like DirectShow on Windows, or V4L on Linux) to access cameras. So if your industrial camera:
Requires a proprietary SDK, or
Doesn't expose a DirectShow interface,
then EmguCV won't be able to see or use it via the usual Capture or VideoCapture class.
Does your camera show up in regular webcam apps?
If it doesn’t show up in apps like Windows Camera or OBS, then it’s not available via DirectShow — meaning EmguCV can’t access it natively.
Check EmguCV camera index or path
If the camera does appear in regular apps, you can try:
var capture = new VideoCapture(0); // Try index 1, 2, etc.
But again, if your camera uses its own SDK (like Basler’s Pylon, IDS, Daheng SDK, etc.), this won’t work.
Most industrial cameras provide their own .NET-compatible SDKs. Use that SDK to grab frames, then feed those images into EmguCV like so:
// Assume you get a Bitmap or raw buffer from the SDK
Bitmap bitmap = GetFrameFromCameraSDK();
// Convert to an EmguCV Mat (or use CvInvoke.Imread if saving to disk temporarily)
Mat mat = bitmap.ToMat();
// Now use EmguCV functions on mat
You’ll basically use the vendor SDK to acquire, and EmguCV to process.
If you're feeling ambitious and want to keep using EmguCV's patterns, you could extend Capture or create a custom class to wrap your camera SDK, but that's quite involved.
EmguCV doesn’t natively support cameras that require special SDKs.
Most industrial cameras do require their own SDKs to function.
Your best bet: use the SDK to get frames, then convert those into Mat or Image<Bgr, Byte> for processing.
Is there a specific need to use the "product" Model? Maybe an abstract Model would do the trick in your case?
class product(models.Model):  # COMMON
    product_name = models.CharField(max_length=100)
    product_id = models.PositiveSmallIntegerField()
    product_desc = models.CharField(max_length=512)
    # ... other shared fields and functions

    class Meta:
        abstract = True

class shirt(product):
    class Size(models.IntegerChoices):
        S = 1, "SMALL"
        M = 2, "MEDIUM"
        L = 3, "LARGE"
        # (...)

    size = models.PositiveSmallIntegerField(
        choices=Size.choices,
        default=Size.S,
    )
    product_type = "shirt"
Mermaid.ink has been known to timeout when rendering graphs with very short node names like "A", particularly in Jupyter notebooks using langgraph. This appears to be a bug or parsing edge case in Mermaid.ink’s backend. Longer node names such as "start_node" or "chatbot" tend to work reliably and avoid the issue. Interestingly, the same Mermaid code usually renders fine in the Mermaid Live Editor, suggesting the problem is specific to Mermaid.ink’s API or langgraph’s integration. Workarounds include using longer node names, switching to Pyppeteer for local rendering, or running a local Mermaid server via Docker.
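For the local-rendering workaround, a minimal sketch assuming a compiled langgraph graph named app (API names per the langchain_core docs):

from langchain_core.runnables.graph import MermaidDrawMethod

png_bytes = app.get_graph().draw_mermaid_png(
    draw_method=MermaidDrawMethod.PYPPETEER  # render locally instead of calling Mermaid.ink
)
with open("graph.png", "wb") as f:
    f.write(png_bytes)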
Boombastick,
I know it's been a while since you asked, but I recently discovered that there are some unanswered questions on Stack Overflow regarding the xeokit SDK. So, just in case it might still be relevant and good to know for others too: we always recommend the following steps when trying to tackle an issue.
In your particular case, it would be useful if you could reproduce the bug with one of the SDK or BIM Viewer examples from https://xeokit.github.io/xeokit-sdk/examples/index.html and then post it on GitHub Issues. There is usually someone there to take care of bugs.
Use @ValidateIf and handle the logic inside it. Instead of layering multiple @ValidateIfs and validators, consolidate validation with a single @ValidateIf() per conditional branch:
@ValidateIf((o) => o.transactionType === TransactionType.PAYMENT)
@IsNotEmpty({ message: 'Received amount is required for PAYMENT transactions' })
@IsNumber()
receivedAmount?: number;
Note that a class cannot declare receivedAmount twice, so the SALE branch (rejecting a provided value with @IsEmpty) cannot simply be stacked as a second property declaration; that is what the custom validator below solves.
Create a custom validator for receivedAmount:
import {
registerDecorator,
ValidationOptions,
ValidationArguments,
} from 'class-validator';
export function IsValidReceivedAmount(validationOptions?: ValidationOptions) {
return function (object: any, propertyName: string) {
registerDecorator({
name: 'isValidReceivedAmount',
target: object.constructor,
propertyName: propertyName,
options: validationOptions,
validator: {
validate(value: any, args: ValidationArguments) {
const obj = args.object as any;
if (obj.transactionType === 'PAYMENT') {
return typeof value === 'number' && value !== null;
} else if (obj.transactionType === 'SALE') {
return value === undefined || value === null;
}
return true;
},
defaultMessage(args: ValidationArguments) {
const obj = args.object as any;
if (obj.transactionType === 'SALE') {
return 'receivedAmount should not be provided for SALE transactions';
}
if (obj.transactionType === 'PAYMENT') {
return 'receivedAmount is required for PAYMENT transactions';
}
return 'Invalid value for receivedAmount';
},
},
});
};
}
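Applying it in the DTO could then look like this (class and enum names follow the thread; a minimal sketch):

class CreateTransactionDto {
  transactionType: TransactionType;

  @IsValidReceivedAmount()
  receivedAmount?: number;
}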
Very late to the game, but I've got a nice workaround.
Add a container element into the rack, and then add your smaller equipment into it.
This way it will work with other rack elements, but won't be resized.
It seems the answer is to leave out the cross compilation arguments.
export CFLAGS="-arch x86_64"
./configure --enable-shared
Configure gets confused about cross compilation on macOS, because when it tries to execute a cross compiled program, the program does not fail (thanks to Rosetta, I presume).
You should always do your due diligence when adding a new package to your codebase; at the end of the day, it is third-party code.
I think your main worry is your credentials being exposed. This package in particular seems to be popular enough to be battle-tested and trusted by a good chunk of the community.
I think you'll be fine. Just remember to keep your credentials secret, and that means not adding them to version control. Use environment variables or any of the other methods listed here to set your credentials.
Add this to your functions.php:
function ninja_table_por_post_id() {
$post_id = get_the_ID();
$shortcode = '[ninja_tables id="446" search=0 filter="' . $post_id . '" filter_column="Filter4" columns="name,address,city,website,facebook"]';
return do_shortcode($shortcode);
}
add_shortcode('tabla_post_actual', 'ninja_table_por_post_id');
This code creates a new shortcode called [tabla_post_actual]; inserting it into any WordPress template or content will show the table filtered by the current post's ID.
Usage:
<?php echo do_shortcode('[tabla_post_actual]'); ?>
For all modern versions of the sdk, it's just dotnet fsi myfile.fsx
Based on the available information, Babu89BD appears to be a web-based platform, likely associated with online gaming or betting services. However, the site https://babu89bd.app provides very limited public information about what the app actually does, how to use it, or whether it's secure and legitimate.
If you're trying to figure out its purpose:
It seems to require login access before showing any details, which may be a red flag.
The site doesn't list a privacy policy, terms of service, or contact information—important factors for trust and transparency.
The design and naming resemble other platforms often used for online gambling, especially popular in South Asia.
Caution is advised if you're unsure about the legitimacy. Avoid entering personal or financial information until you can verify its credibility.
If anyone has used this app and can confirm its features or authenticity, please share your insights.
From what I could find on the JetBrains website, you can disable it by including an empty file named .noai in IntelliJ.
This worked for me, so it should hopefully work for you as well.
Just run the command:
composer remove laravel/jetstream
I am not sure where these anchor boxes are coming from. Are they defined during the model training process?
I am doing custom object detection using mediapipe model_maker, but it creates a model that has two outputs, of Shape=[ 1 27621 4] and Shape=[ 1 27621 3].
I am totally confused about what is going on, and how can I get the four outputs? I want output locations, classes, scores, and detections.
Following is my current code; please help me understand what's going on and how to obtain the desired outputs.
# Set up the model
quantization_config = quantization.QuantizationConfig.for_float16()
spec = object_detector.SupportedModels.MOBILENET_MULTI_AVG
hparams = object_detector.HParams(export_dir='exported_model', epochs=30)
options = object_detector.ObjectDetectorOptions(
supported_model=spec,
hparams=hparams
)
# Run retraining
model = object_detector.ObjectDetector.create(
train_data=train_data,
validation_data=validation_data,
options=options)
# Evaluate the model
loss, coco_metrics = model.evaluate(validation_data, batch_size=4)
print(f"Validation loss: {loss}")
print(f"Validation coco metrics: {coco_metrics}")
# Save the model
model.export_model(model_name=regular_output_model_name)
model.export_model(model_name=fp16_output_model_name, quantization_config=quantization_config)
Using %d with sscanf can cause a problem, because %d expects an int, which can be 2 bytes or 4 bytes.
In earlier days an int was often 2 bytes, but in more modern environments it is typically 4 bytes. To be certain you read a 2-byte value, replace %d with %hd or %hi (for a signed short) or %hu (for an unsigned short).
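A minimal illustration of reading a 2-byte value with sscanf:

#include <stdio.h>

int main(void) {
    short value;                    /* typically 2 bytes */
    sscanf("12345", "%hd", &value); /* %hd matches a short, not an int */
    printf("%hd\n", value);
    return 0;
}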
Came back to this question years later to offer an update.
Laravel 12 has a new feature called Automatic Eager Loading, which fixed this issue of eager loading in recursive relationships for me.
https://laravel.com/docs/12.x/eloquent-relationships#automatic-eager-loading
The command to install the stable version of PyTorch (2.7.0) with CUDA 12.8 using pip on Linux is:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
Store tenant info somewhere dynamic. Instead of putting all your tenant info (like issuer and audience) in appsettings.json, store it in a database or some other place that can be updated while the app is running. This way, when a new tenant is added, you don't need to restart the app.
Figure out which tenant is making the request. When a request comes in, figure out which tenant it belongs to. You can do this by:
Checking a custom header (e.g., X-Tenant-Id)
Looking at the domain they're using
Or even grabbing the tenant ID from a claim inside the JWT token
Validate the token dynamically. Use JwtBearerEvents (or the validator delegates on TokenValidationParameters) to customize how tokens are validated. This lets you check the tenant info on the fly for each request. Here's how it works:
When a request comes in, grab the tenant ID
Look up the tenant's settings (issuer, audience, etc.) from your database or wherever you're storing them
Validate the token using those settings (see the sketch below)
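A minimal sketch of that last step (tenantStore, a cached lookup against your database, is an assumption, not a fixed API; the validator delegates below achieve the same per-request lookup as the JwtBearerEvents approach):

using System.Linq;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            // These delegates run per token, so newly added tenants work without a restart.
            IssuerValidator = (issuer, token, parameters) =>
                tenantStore.IsKnownIssuer(issuer) // hypothetical lookup
                    ? issuer
                    : throw new SecurityTokenInvalidIssuerException($"Unknown issuer: {issuer}"),
            AudienceValidator = (audiences, token, parameters) =>
                audiences.Any(a => tenantStore.IsKnownAudience(a)) // hypothetical lookup
        };
    });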
This could be helpful: https://github.com/mikhailpetrusheuski/multi-tenant-keycloak and this blog post: https://medium.com/@mikhail.petrusheuski/multi-tenant-net-applications-with-keycloak-realms-my-hands-on-approach-e58e7e28e6a3
Shoutout to Mikhail Petrusheuski for the source code and detailed explanation!
Not sure if anyone is still monitoring this thread, but better late than never. We have launched a new unified GitOps controller for ECS (EC2 and Fargate) and Lambda; EKS is also coming soon. Check it out, we would love to engage on this: https://gitmoxi.io
I had the same problem and resolved it by adding .python between tensorflow and keras. So instead of tensorflow.keras, I wrote: tensorflow.python.keras
Adding GeneratedPluginRegistrant.registerWith(flutterEngine); to MainActivity.kt did work for me.
import io.flutter.plugins.GeneratedPluginRegistrant
//...
override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
super.configureFlutterEngine(flutterEngine)
GeneratedPluginRegistrant.registerWith(flutterEngine);
configureChannels(flutterEngine)
}
Source:
https://github.com/firebase/flutterfire/issues/9113#issuecomment-1188429009