const cors = require('cors');
app.use(cors());
This may not be the cleanest solution and I'll still leave this question as unsolved as it isn't fully understood in my opinion. Yet still I want to share my experience and how it worked out for me.
I modified it a little bit, but I used the code from this post as a minimal working example. I thought map_id had to be the same as in activeMapType: map.supportedMapTypes[map_id], but this seems not to be the case. Using map_id = 1 and map.supportedMapTypes[0] resulted in the desired Street Map map type and in using offline tiles.
I wrote an article on searching encrypted birthdays. As long as the data you are encrypting/searching isn't identifiable by itself, this method works well.
https://www.sqlservercentral.com/articles/searching-an-encrypted-column
Today Google will show the test it makes, so you can confirm whether Google is using the up-to-date validation, per the Schema.org CSV or JSON file of allowed @type elements. Simply hover over the 'failed' test element and then compare to: https://github.com/schemaorg/schemaorg/blob/main/data/releases/14.0/schemaorg-all-https.jsonld
Line 19554 confirms.
Serilog sub-loggers would be an adequate start; that's almost your exact example.
To ensure the public/ directory is correctly exposed when deploying a PHP project on Vercel, you need to configure the routing so that all incoming requests are internally directed to that folder. This allows users to access the app without explicitly including public/ in the URL. Typically, this is done by setting up URL rewrites in the deployment configuration so that the public/ folder acts as the root for your application. Additionally, it’s important to place all your public-facing files inside that directory and avoid referencing the public/ folder in your paths or links directly. This setup helps maintain a clean and secure URL structure while aligning with Vercel's deployment model.
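For illustration, that rewrite typically lives in a vercel.json at the project root; a minimal sketch (the vercel-php runtime pin is an assumption, adjust to your setup):
{
  "functions": {
    "public/**/*.php": { "runtime": "vercel-php@0.6.0" }
  },
  "rewrites": [
    { "source": "/(.*)", "destination": "/public/$1" }
  ]
}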
Thanks for the help. I changed the inputs a little, reinstalled Python and the libraries so the path is set automatically, and finally it's giving me hand landmarks :)
First hand landmarks: [NormalizedLandmark(x=0.35780423879623413, y=0.005128130316734314, z=7.445843266395968e-07, visibility=0.0, presence=0.0),
What I finally have is:
# Missing imports (cv2, Path) and the option/enum aliases used below added
import cv2
import mediapipe as mp
from pathlib import Path
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from mediapipe.tasks.python.core.base_options import BaseOptions
from mediapipe.tasks.python.vision import HandLandmarkerOptions, RunningMode as VisionRunningMode
# Use an absolute path
task_path = Path(__file__).parent / "hand_landmarker.task"
print("Resolved model path:", task_path)
if not task_path.exists():
    raise FileNotFoundError(f"Model file not found at: {task_path}")

options = HandLandmarkerOptions(
    base_options=BaseOptions(model_asset_path=str(task_path.resolve())),
    running_mode=VisionRunningMode.IMAGE,
    num_hands=2
)
detector = vision.HandLandmarker.create_from_options(options)
# Hands (cv2_image is the BGR frame loaded earlier, e.g. via cv2.imread)
mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=cv2.cvtColor(cv2_image, cv2.COLOR_BGR2RGB))
hands_results = detector.detect(mp_image)
results = {}
results['hands'] = hands_results.hand_landmarks
hand_landmarks = results.get('hands')
first_hand = hand_landmarks[0] if hand_landmarks and len(hand_landmarks) > 0 else None
print("First hand landmarks:", first_hand)
You fail to call your mainLoop(). You initialize your app, but all that does is call the constructor. Add myApp.mainLoop() at the end.
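For context, a minimal tkinter sketch of that pattern (the MyApp/mainLoop naming mirrors the question and is an assumption about the asker's code):
import tkinter as tk

class MyApp:
    def __init__(self):
        self.root = tk.Tk()  # the constructor only builds the window
        tk.Label(self.root, text="Hello").pack()

    def mainLoop(self):
        self.root.mainloop()  # hands control to Tk's event loop

myApp = MyApp()
myApp.mainLoop()  # without this call the script exits and nothing is shown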
You can do this using Laravel Eloquent. First create these models:
Country, PaymentMethod, CountryPaymentMethod, PaymentMethodConfiguration
And then create these relationships inside the models.
// Country.php
public function countryPaymentMethods()
{
    return $this->hasMany(CountryPaymentMethod::class);
}

// PaymentMethod.php
public function countryPaymentMethods()
{
    return $this->hasMany(CountryPaymentMethod::class);
}

// CountryPaymentMethod.php
public function configurations()
{
    return $this->hasMany(PaymentMethodConfiguration::class);
}

public function country()
{
    return $this->belongsTo(Country::class);
}

public function paymentMethod()
{
    return $this->belongsTo(PaymentMethod::class);
}

// PaymentMethodConfiguration.php
public function countryPaymentMethod()
{
    return $this->belongsTo(CountryPaymentMethod::class);
}
And you can query them like this:
$country = Country::where('name', 'Ireland')->first();
$paymentMethod = PaymentMethod::where('name', 'Mobile Money')->first();

$configurations = PaymentMethodConfiguration::whereHas('countryPaymentMethod', function ($query) use ($country, $paymentMethod) {
    $query->where('country_id', $country->id)
        ->where('payment_method_id', $paymentMethod->id)
        ->where('is_active', true);
})->get();
Study more about this here:
Laravel - Eloquent "Has", "With", "WhereHas" - What do they mean?
Thank you for the feedback. I am the author of grapa. It started out as more of a personal project where I could test out ideas, but evolved over time. There are a lot of capabilities in the language that I am finding with ChatGPT and Cursor are unique. With the help of Cursor, I've revamped the docs considerably, and also revised the CLI interface. The CLI piping is now there.
https://grapa-dev.github.io/grapa/about/
https://grapa-dev.github.io/grapa/cli_quickstart/
See the above for the new CLI options.
Just recently added is what appears to be the most feature rich and performant grep available.
https://grapa-dev.github.io/grapa/grep/
I will be focusing on completing the language over the next few months. It is pretty complete as it is: production-ready, well tested, and stable (with the exception of a ROW table type in the DB, but the COL column store works very well and is quite a bit faster anyway).
I implemented the file system and DB using the same structure so you traverse the DB with similar commands you would a file system. And the DB supports multiple levels with GROUP.
https://grapa-dev.github.io/grapa/sys/file/
The docs were fully created by GenAI tools - and GenAI does not have the pattern of the grapa language, so it keeps generating grapa code with javascript or python syntax. I had it create the basic syntax doc in the docs to help constrain this, and that helps a great deal, but it doesn't always reference it. I still need to scrub all the docs to verify/fix things.
Your understanding is correct: volumeBindingMode belongs to StorageClass. The Helm chart's templating is likely using this variable to configure an associated StorageClass, and it's not being directly applied to the PersistentVolumeClaim in the final manifest.
If you find volumeBindingMode actually appearing in the rendered PVC YAML, then that would indicate an incorrect Helm chart definition, which should be reported to the chart maintainers.
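For reference, this is the shape of a StorageClass that sets volumeBindingMode (a generic sketch, not this chart's actual output; the provisioner is a placeholder):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc
provisioner: kubernetes.io/no-provisioner  # placeholder
volumeBindingMode: WaitForFirstConsumer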
https://www.parseviewerx.com/json/json_viewer
Try this out. I recently tried this online JSON viewer tool and found it amazing; I would like to implement something similar on my site to enhance usability.
If I get it right, you can easily achieve this by using the scipy.ndimage module. Like this:
import numpy as np
from scipy.ndimage import label, find_objects
DashBoard = np.zeros((10,10), dtype=int)
DashBoard[5,5] = 1
DashBoard[5,4] = 1
DashBoard[5,6] = 1
DashBoard[6,6] = 1
DashBoard[7,7] = 1
print(DashBoard)
The initial DashBoard; note the "alien" at (7,7).
# define which neighbours count as adjacent (in this case, exclude diagonals)
s = [[0,1,0],
     [1,1,1],
     [0,1,0]]
# label adjacent "islands"
labeled, nums = label(DashBoard, structure=s)
print(nums)
print(labeled)
labeled visualized:
loc = find_objects(labeled)[0]
res = labeled[loc]
res visualized:
print(res.T.shape)
will give you
(3, 2)
Only the data manipulation language (DML) changes are updated automatically during continuous migrations, whereas the data definition language (DDL) changes are the user's responsibility to ensure compatibility between the source and destination databases, and can be done in two ways—refer to this documentation.
It is also recommended to review the known limitations of your migration scenario before proceeding with the database migration. See here for the limitations specific to using a PostgreSQL database as the source.
But how will I run it in debug mode?
@mass-dot-net:
Resolving the Cookies problem just requires creating your own implementation of HttpCookies that you can instantiate:
public class FakeHttpCookies : HttpCookies
{
    public override void Append(string name, string value)
    {
        throw new NotImplementedException();
    }

    public override void Append(IHttpCookie cookie)
    {
        throw new NotImplementedException();
    }

    public override IHttpCookie CreateNew()
    {
        throw new NotImplementedException();
    }
}
Now you can set the value in the MockHttpResponseData constructor:
public MockHttpResponseData(FunctionContext functionContext, HttpCookies? cookies = null) : base(functionContext)
{
    Cookies = cookies ?? new FakeHttpCookies();
}
Especially if you want to create an indexed view (materialized view) with a UNION inside that view, which is not possible: "Cannot create index on view 'sql.dbo.vw_INDX' because it contains one or more UNION, INTERSECT, or EXCEPT operators. Consider creating a separate indexed view for each query that is an input to the UNION, INTERSECT, or EXCEPT operators of the original view."
But I need that view in another Query to join with.
PS: MS SQL Server 2014 Enterprise
I need help in Adobe to create a conditional font color for any value within a text field: anything greater than 0 should be green, anything less than 0 red, and anything neutral black. I am not a coder, so any help with custom JavaScript in my PDF form would be appreciated.
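In case it helps, here is a minimal sketch of the kind of custom JavaScript Acrobat accepts; it would go in the text field's custom Format script and assumes the field holds a numeric value:
// Acrobat custom Format script for a numeric text field (a sketch)
var v = Number(event.value);
if (isNaN(v) || v === 0) {
    event.target.textColor = color.black;  // neutral
} else if (v > 0) {
    event.target.textColor = color.green;
} else {
    event.target.textColor = color.red;
}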
To limit Tailwind CSS styles to a specific React component, you can scope Tailwind classes using custom prefixes, context wrappers, or CSS Modules. Tailwind is global by default, so it doesn't naturally scope to one component.
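As one sketch of the prefix approach, assuming a dedicated Tailwind build for the component (the prefix value and paths are examples):
// tailwind.config.js
module.exports = {
  prefix: 'tw-',  // utilities become tw-flex, tw-p-4, ...
  content: ['./src/components/MyWidget/**/*.{js,jsx,tsx}'],  // scan only this component
  corePlugins: { preflight: false },  // skip Tailwind's global base resets
};
The component then uses the prefixed classes, e.g. <div className="tw-flex tw-p-4">, so unprefixed class names elsewhere in the app are unaffected.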
My observation is this: Schema.org content can be deployed in both the HEADER and the BODY. The question is not cut and dried; the nuance is the size of the script and what tools you use to deploy.
Consider page load, time to paint implications too.
In a world of AI engines we need to tackle this problem further upstream and look to create AI endpoints with full JSON-LD Schema.org content files for real-time ingestion. Not depend on page by page loading.
There is an initiative, see git Schema.txt, to direct this traffic via robots.txt to a dedicated schema.txt file that contains links to json-ld files and catalog schema.org @IDs. AI crawlers can then access what they need to run semantic queries, that replace today's regular search. In turn this makes your Schema metadata even more precious to your future website.
When it comes down to numbers, everybody is just "silent". It's a secret in data mining and a huge conspiracy against the end user! Well, I test a lot, and I can only say someone is lying big time. All those tools aren't working the way they are supposed to; the results obtained in reality are clear evidence of that. And all I hear in excuse of those tools is "your data quality isn't good enough". What does that sentence really mean? That the model can't learn from the data (though it should, given the weights adjusted in the model)? The tools are too poorly developed. Optuna, in the wastes of hyperparameter space, should find proper hyperparameters, and the resulting model should converge to that target. If you use the same file setup as in Optuna, with the same classifier and target column, you should get pretty much the same results when testing on the rest of the file for that particular target column. Why those results differ so much is a clue I am still figuring out. It's math; it should reproduce the same results. If not, something is wrong: the weights aren't the same, thus the results are always worse. How to solve this I don't know. In the end I always tune models manually, and sometimes find proper hyperparameters that Optuna never finds, though it should. AI should do the model hyperparameter search, with all the needed metrics, evaluation plots, and explanations; we don't have time to fool around with all this manual work. I thought Optuna would solve my problems too, but in my experience Optuna never delivers good models, even though I have tried all the optional metrics in Optuna, including a custom balanced-AUC formula of my own. Optuna simply isn't up to the job. The causes can be very varied, and all you hear is philosophizing about it.
If you want to print "Hello World" (or just "Hello"), here is how to do that in Java:
class Main {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
If you want only "Hello", write just "Hello" inside the println call.
You're asking a few different things here. First, you already partly answered your own question on ESM compatibility, and partly the const-style enums etc. The remaining question is how to share this between Angular and Nest.
Nothing in a DTO should be Angular- or Nest-specific. How you set it up is kind of up to you, and out of scope of at least the titular question. So I'll take the second part and try to answer it.
---
What you want to do here is essentially to share a TypeScript package between two different TypeScript projects. How depends on how your projects are set up. I can only suggest vague ideas and concepts; you'll have to either clarify the question further or figure out the rest of the way on your own.
Do you have a monorepo set up? Something like nx.dev or turborepo? npm workspaces? They each have their own ways of setting up and sharing projects.
Otherwise, if you have separate repos, you could again do a few things differently. For one - you could install the DTO in, say, your NestJS package. You would have to export that library as an actual export. You could then add that entire package as a dependency for your Angular project. That would, of course, suck for various reasons.
The last option I'll propose is to make a third package, call it "validations" or something similar, and share that as a dependency for both other packages.
That's how you get clean isolation, no duplication, even versioning support, and proper tree-shaking support (you import TypeScript files into your sources before the resulting app code is transpiled into JavaScript).
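As a sketch, the shared package's package.json could look like this (names and versions are placeholders); both apps then declare it as a dependency:
{
  "name": "@myorg/validations",
  "version": "1.0.0",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "scripts": { "build": "tsc -p tsconfig.json" }
}
In a monorepo the workspace tooling links it for you; with separate repos you'd publish it to a registry (or npm link it during development).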
Seems to be working now in beta4.
Something like this?
>>> import re
>>> p = r"\s*\([^()]*\)"
>>> re.sub(p, "", "Hello (this is extra) world (more text)")
'Hello world'
Maybe a little late for you, but the best way to do this is to use useFetchClient:
const { get, post } = useFetchClient();
const test = await get('/users')
This uses the internal admin routes with authentication.
Not shared by default, but you can configure it
https://learn.microsoft.com/en-us/entra/msal/dotnet/how-to/token-cache-serialization?tabs=msal
That little "highlight" (called a tab attention indicator) is controlled by the browser itself, and you cannot manually trigger it via JavaScript. The browser only shows it for specific built-in events like:
• alert(), confirm(), or prompt() dialogs
• some types of audio playing
• push notifications
It is a fundamental part of the browser's UX, hence it can't be controlled at all.
I was working on a feature in a project where I wanted to detect the locale of the text inside the TextField, to change the direction of the widget and the TextStyle as well. I have built a Flutter package to handle this: auto_lang_field on pub.dev.
flutter pub add auto_lang_field
For customization, I also created a repo containing data for 81+ languages to customize the language detection; also check the Language Detection Considerations section in the package README for more details.
I really hope this package is helpful for Flutter developers.
Have you considered edge cases where the timezone shifts due to DST or other regional rules? Using Python's zoneinfo (Python 3.9+) or pytz can help ensure the billing cycle consistently aligns with the 26th–25th range across different timezones.
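For instance, a minimal sketch with zoneinfo (the timezone and the midnight/end-of-day cutoffs are assumptions about the setup):
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def billing_period(now, tz="Europe/Dublin"):
    """Return (start, end) of the 26th-to-25th cycle containing `now`."""
    local = now.astimezone(ZoneInfo(tz))
    if local.day >= 26:
        start = local.replace(day=26, hour=0, minute=0, second=0, microsecond=0)
    else:
        first = local.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
        start = (first - timedelta(days=1)).replace(day=26)  # 26th of previous month
    # the 25th of the month after `start` closes the cycle
    end = (start.replace(day=1) + timedelta(days=32)).replace(day=25, hour=23, minute=59, second=59)
    return start, end

print(billing_period(datetime.now(ZoneInfo("UTC"))))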
Oh, I already have a solution for this. My problem wasn't AJV validation at all; my mistake was using a select option with value="", which makes the select element required by HTML:
<select>
  <option value="">option 0</option>  <!-- the empty value makes the select tag behave as required -->
</select>
Can I apply the required attribute to <select> fields in HTML?
Although this solution may not be automation, you can manually specify the folder name into which you want to clone your repository. Here's a way to do it:
git clone https://github.com/xyz/project1.git xyz/project1
This will clone your repository into the xyz directory, in the project1 sub-directory.
Have you found a solution to this problem? Thank you in advance for your response.
The router address you're using (0xE592427A0AEce92De3Edee1F18E0157C05861564) is incorrect for Sepolia: that's the mainnet address for Uniswap V3's SwapRouter. On the Sepolia testnet, you should use the SwapRouter02 address at 0x3bFA4769FB09eefC5a80d6E87c3B9C650f7Ae48E instead.
In Remix, when the transaction simulates and fails, expand the error message; it often includes a specific revert reason (e.g., "INSUFFICIENT_LIQUIDITY" or "INVALID_POOL"). That can pinpoint whether it's something beyond the router address.
Reverting to 1.17.5 seems to have fixed the problem for me.
I opened a gh issue so maybe it will be fixed in 1.18.2
The problem with the PIVOT function in my case is that I get insufficient-privileges error messages. I am not sure which calls within the pivot function(s) are causing this error.
MetPy's divergence computation uses the three-point method outlined in "Derivative formulae and errors for non-uniformly spaced points", Proceedings of the Royal Society A, May 2005, DOI: 10.1098/rspa.2004.1430.
Free download at https://www.researchgate.net/publication/228577212
SELECT *
FROM your_table
WHERE xmin = pg_current_xact_id()::xid;
I asked this question on the Ansible forums as well and received a response.
Quick link: Error: 'ProxmoxNodeInfoAnsible' object has no attribute 'module' · Issue #114 · ansible-collections/community.proxmox · GitHub, i.e. check your proxmoxer version.
TL;DR: Proxmox v8.4 only comes with proxmoxer 1.2.0; it needs to be >=2.2. The best option suggested is to create a Python 3 venv and run a special ansible user on the Proxmox host.
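Roughly, that workaround looks like this on the Proxmox host (paths and the interpreter setting are examples, not from the thread):
python3 -m venv /opt/ansible-venv
/opt/ansible-venv/bin/pip install "proxmoxer>=2.2"
# then point Ansible at it in the inventory for that host:
# ansible_python_interpreter=/opt/ansible-venv/bin/python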
It works for me; ibm.biz is a safe place, don't worry.
To prevent potential disruptions in the main automation flow, especially in cases where the MySQL database is unavailable or encounters an error, all logging operations are executed asynchronously. This is implemented using the "Execute Workflow" node with the "Wait for Completion" option set to false. By offloading the logging process to an independent sub-workflow, the system ensures that logging tasks run in the background without impacting the execution or speed of the main workflow. The sub-workflow contains the logic to insert logs into the MySQL table and includes proper error handling (e.g., try-catch) to silently handle any database issues. This design pattern offers a reliable and fault-tolerant approach to logging, maintaining data traceability without compromising workflow continuity.
This code accesses the "ColumnTitle" (e.g., "Previous Value") from a SharePoint HTTP request output, extracting a list of user details with LookupId, LookupValue (name), and Email from a nested JSON structure.
Trying different things, it looks like it could be the names of the arguments in wnominate(). Try adding the "rcObject" argument:
res <- wnominate(
    rcObject = rc_samp,
    dims = 1,
    polarity = 4,
    minvotes = 2
)
After 10 days of debugging, I found the root cause: I forgot to define the key and value in my Kubernetes ConfigMap in the Terraform config. This had several consequences:
1. The application received an empty list for `allowedOrigins` in the CORS configuration.
2. The CORS filter couldn't match the request's origin against the empty allowed-origins list.
3. For preflight requests, the filter simply passed the request down the chain.
4. Since I hadn't implemented any handler for OPTIONS requests in my application, it resulted in a 405 Method Not Allowed error.
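For illustration, the missing piece was of this shape (a sketch; the names and values are stand-ins, not my actual config):
resource "kubernetes_config_map" "app_config" {
  metadata {
    name = "app-config"
  }
  data = {
    # the key/value I had forgotten to define
    "CORS_ALLOWED_ORIGINS" = "https://app.example.com"
  }
}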
Hi, a bit late, but I made a workaround.
I provide the JavaScript code here; the needed HTML is explained in the comments at the top.
// This code needs an input field with ID "Sequenz",
// - a div with ID "Feedback" to show the keydown char,
// - a div with ID "FeedbackU" to show the input char,
// - a div with ID "key" to show the entered key,
// - and a div with ID "Java" (used in init() below).
function init(){
    document.addEventListener('keydown', function(event){ keydown(event); });
    document.getElementById('Java').innerHTML = "Keypress";
    // Use this line to add events to every input field, just call with the ID
    AddEventtoInput("Sequenz");
}

var handled = false;

function keydown(event){
    if(event.which == 229){
        return;
    }else{
        handled = true;
        document.getElementById('Feedback').innerHTML = event.key;
        if (keypress(event.key) == true){
            event.preventDefault();
        }
    }
    // ----- Demonstration of values. Please delete.
    document.getElementById('Feedback').innerHTML = document.getElementById('Feedback').innerHTML + " - " + event.which;
    // ------------------------
}

function AddEventtoInput(div){
    document.getElementById(div).addEventListener("input", keyinput);
}

function keyinput(event){
    var key = event.target.value;
    key = key.slice(-1);
    if(key.indexOf('\t') != -1){ key = "Tab"; }
    if(handled == false){
        // ----- Demonstration of values. Please delete.
        document.getElementById('FeedbackU').innerHTML = key;
        // ------------------------
        if (keypress(key) == true){
            event.target.value = event.target.value.slice(0, -1);
        }
    }else{
        handled = false;
    }
}

function keypress(key){
    var prevent = false;
    // ----- Here goes the code to check the entered char; set prevent to true to block the char
    document.getElementById('key').innerHTML = key;
    // -----------------------
    return prevent;
}
I have written this code and it is free to use (public domain). Please help yourself and add it to your project.
M. Glaser
Munich Germany
Looks like this issue has been fixed in the last few versions of Angular.
Here is an example of Angular Material components working inside a component with encapsulation set to ShadowDom:
And if you need the contents of cdkOverlay, modals, etc to be inside the shadow dom, here is a proof of concept doing that:
It seems like the n = n + 1 is inside the for loop, which may be causing it to increment multiple times unexpectedly. You might be skipping some indexes or going out of range. Try moving the increment outside the for loop, or consider using a nested loop instead of combining while and for.
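A hypothetical sketch of that pattern (the original code isn't shown, so the names here are made up):
items = ["ab", "cd", "ef"]
n = 0
while n < len(items):
    for ch in items[n]:
        pass         # ... inner work ...
        # n = n + 1  # BUG if done here: bumps n once per inner iteration
    n = n + 1        # increment once per outer pass instead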
I am writing a Python compatible syntax, but the concept should work for C++ too.
comboBox.setEditable(True)
comboBox.lineEdit().setAlignment(Qt.AlignCenter)
comboBox.lineEdit().setReadOnly(True)
The idea is to make it editable, but then set the lineEdit inside the QComboBox as read-only.
This code defines a function to find matches of a regex pattern that occur only outside parentheses in a string, by manually tracking nesting depth, ensuring precise, context-aware pattern matching.
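The code itself didn't make it into the post, but a sketch of the depth-tracking approach it describes could look like this (the function name and details are mine):
import re

def findall_outside_parens(pattern, text):
    """Collect regex matches that occur only outside (possibly nested) parentheses."""
    depth = 0
    outside = []  # substrings of `text` at nesting depth 0
    start = 0
    for i, ch in enumerate(text):
        if ch == '(':
            if depth == 0:
                outside.append(text[start:i])
            depth += 1
        elif ch == ')':
            depth = max(0, depth - 1)
            if depth == 0:
                start = i + 1
    if depth == 0:
        outside.append(text[start:])
    matches = []
    for segment in outside:
        matches.extend(m.group(0) for m in re.finditer(pattern, segment))
    return matches

print(findall_outside_parens(r"\w+", "foo (bar (baz)) qux"))  # ['foo', 'qux']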
One alternative is the Boxtin project, but as of now, it's not yet released. "Boxtin is a customizable Java security manager agent, intended to replace the original security manager"
You should add missing foreign keys to ensure data integrity and generate accurate database diagrams. However, don’t add all of them at once — do it in phases. First, check for invalid data that may violate the constraints. Adding too many FKs at once can cause locks, performance hits, or app errors if the data isn’t clean. Test changes in a dev environment, fix any orphaned rows, and roll out gradually. This improves design, avoids bad data, and helps tools like SSMS show correct relationships.
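Since SSMS is mentioned, a phased SQL Server rollout might look like this sketch (table, column, and constraint names are hypothetical):
-- 1) find orphaned rows first
SELECT c.*
FROM child c
LEFT JOIN parent p ON p.id = c.parent_id
WHERE p.id IS NULL;

-- 2) add the FK without validating existing rows, then validate once the data is clean
ALTER TABLE child WITH NOCHECK
    ADD CONSTRAINT FK_child_parent FOREIGN KEY (parent_id) REFERENCES parent(id);
ALTER TABLE child WITH CHECK CHECK CONSTRAINT FK_child_parent;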
This happened to me, using the JUCE library with Projucer. Uninstalled the app from my phone, then ran it on Android Studio again.
I think this AI-assisted answer is the clearest:
=EOMONTH(A1,0)-MOD(WEEKDAY(EOMONTH(A1,0),2)-5,7)
A1 is a date value with any date in the subject month. The -5 is for Friday; for other days, use -4 for Thursday, and so on through the sequence -1 (Monday) to -7 (Sunday).
Can someone create a zig zag indicator with ATR filter?
This is such a good game. This code implements a basic Tic-Tac-Toe game using Jetpack Compose in Kotlin, handling game logic, state management, UI rendering, and player turns, with reactive updates and a modal for end-game results like a win or draw.
I was able to make this work by changing the 'print_info' argument to 'print_stats'. The documentation for the tg_louvain graph data science library function (https://docs.tigergraph.com/graph-ml/3.10/community-algorithms/louvain) incorrectly mentions the argument to print the result as 'print_info', I simply referred to the actual implementation here https://github.com/tigergraph/gsql-graph-algorithms and figured out what the correct argument name was.
Tried using the server’s own cert for verify, but it failed with an issuer error. Makes sense now: it’s just the leaf cert, not the whole trust chain.
How's this? I had to get rid of the sub-picker each time the category changed (redundant code), but this worked:
struct ContentView2: View {
    enum Category: String, CaseIterable, Identifiable {
        case typeA, typeB, typeC
        var id: Self { self }
        var availableOptions: [Option] {
            switch self {
            case .typeA: return [.a1, .a2]
            case .typeB: return [.b1, .b2]
            case .typeC: return [.c1, .c2]
            }
        }
    }

    enum Option: String, CaseIterable, Identifiable {
        case a1, a2, b1, b2, c1, c2
        var id: Self { self }
    }

    @State private var selectedCategory: Category = .typeA
    @State private var selectedOption: Option = .a1

    var body: some View {
        Form {
            Section(header: Text("Selection")) {
                HStack {
                    Picker("", selection: $selectedCategory) {
                        ForEach(Category.allCases) { category in
                            Text(category.rawValue.capitalized).tag(category)
                        }
                    }
                    Spacer()
                    switch selectedCategory {
                    case .typeA:
                        Picker("", selection: $selectedOption) {
                            ForEach(selectedCategory.availableOptions) { option in
                                Text(option.rawValue.uppercased()).tag(option)
                            }
                        }
                    case .typeB:
                        Picker("", selection: $selectedOption) {
                            ForEach(selectedCategory.availableOptions) { option in
                                Text(option.rawValue.uppercased()).tag(option)
                            }
                        }
                    case .typeC:
                        Picker("", selection: $selectedOption) {
                            ForEach(selectedCategory.availableOptions) { option in
                                Text(option.rawValue.uppercased()).tag(option)
                            }
                        }
                    }
                }
            }
        }
        .labelsHidden()
        .pickerStyle(.menu)
    }
}

#Preview {
    ContentView2()
}
When I need to inspect multiple popup or dropdown elements that disappear on blur, I use this handy snippet:
Open DevTools → Console
Paste the code below and press Enter
window.addEventListener('keydown', function (event) {
    if (event.key === 'F8') {
        debugger;
    }
});
Press F8 (or change the key) anytime to trigger debugger;
and pause execution
This makes it easy to freeze the page and inspect tricky UI elements before they vanish.
👋 You're asking a great question, and it's a very common challenge when bridging string normalization between Python and SQL-based systems like MariaDB.
Unfortunately, MariaDB collations such as utf8mb4_general_ci are not exactly equivalent to Python's str.lower() or str.casefold(). While utf8mb4_general_ci provides case-insensitive comparison, it does not handle Unicode normalization (like removing accents or special casing from some scripts), and it's less aggressive than str.casefold(), which is meant for caseless matching across different languages and scripts.
str.lower() only lowercases characters, and it's limited (e.g. it doesn't handle German ß correctly). str.casefold() is a more aggressive, Unicode-aware version of lower(), intended for caseless string comparisons. utf8mb4_general_ci is a case-insensitive collation, but it doesn't support Unicode normalization like NFKC or NFKD.
Use utf8mb4_unicode_ci or utf8mb4_0900_ai_ci (if available): these are more Unicode-aware than general_ci, though neither replicates str.casefold() completely.
Example:
CREATE TABLE example (
    name VARCHAR(255)
) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
Normalize in Python before insert: if exact normalization (like casefold() or unicodedata.normalize()) is critical, consider pre-processing strings before storing them:
import unicodedata

def normalize(s):
    return unicodedata.normalize('NFKC', s.casefold())
Store a normalized column: add a second column that stores the normalized value and index it for fast equality comparison.
ALTER TABLE users ADD COLUMN name_normalized VARCHAR(255);
CREATE INDEX idx_normalized_name ON users(name_normalized);
Use generated columns (MariaDB 10.2+): with a bit of trickery (though limited to SQL functions), you might offload normalization to the DB via generated columns, but it won't replicate Python's casefold/Unicode normalization fully.
In short: there is no MariaDB collation that is fully equivalent to str.casefold(). Your best bet is to use utf8mb4_unicode_ci for better Unicode-aware comparisons than general_ci, and to normalize in Python before storing.
Hope that helps, and if anyone has found a closer match for casefold() in SQL, I'd love to hear it too!
Thank you for the outline on how to perform y-direction scrolling! Thanks to it, I was able to perform x-direction scrolling, and I thought I would share it below.
<!DOCTYPE html>
<html>
<head></head>
<body>
<div id="div0"></div>
<style>
    #div0 {
        position: absolute;
        top: 0px;
        display: flex;
        flex-direction: row;
        height: 100px;
        max-width: 100px;
        overflow-x: scroll;
        scroll-snap-type: mandatory;
        scroll-snap-points-x: repeat(100px);
        scroll-snap-type: x mandatory;
    }
    .div_simple_ {
        position: relative;
        border: 1px solid gray;
        background-color: seagreen;
        border-radius: 5px;
        width: 100px;
        scroll-snap-align: start;
        height: 100px;
        line-height: 100px;
        text-align: center;
        font-size: 30px;
        flex: 0 0 auto;
    }
</style>
<script>
    var n = 3;
    create_elements();
    function create_elements() {
        for (var i = 0; i < n; i++) {
            var divElement = document.createElement("div");
            divElement.setAttribute("id", "div_" + i);
            divElement.setAttribute("class", "div_simple_");
            divElement.innerText = i;
            document.getElementById("div0").appendChild(divElement);
        }
    }
</script>
</body>
</html>
You can go to vendor/maatwebsite/excel/src/Sheet.php, go to line 675, and change the chunk size from 1000 to (x).
I just noticed something very strange: when I drag the node to the top (as in the image below), the flow starts to work correctly. Now it executes as expected, does the insert (1) when passing the node, and executes the insert (2) at the end, all correctly... I dragged the node down and up several times to be sure, and this does indeed happen: when it's at the bottom, it waits for the last node, when it's at the top, it executes instantly. Is this an expected result?
Watch the video below:
https://drive.google.com/file/d/1HMM-hv1gjyZ6EUJ9dBu-pARknkRPN3_p/view?usp=sharing
I would also like to add that sometimes, when you have a project structure where your controllers live in a different package than your @SpringBootApplication-annotated class, you can add multiple paths to be scanned by Spring Boot, just like:
@ComponentScan({"com.path1", "com.path2"})
There have been several posts elsewhere related to this topic. I experienced the same problem.
Some of the answers appear to me to be mumbo-jumbo: they may indeed work around the issue but do not explain the cause of the problem (e.g. deleting old SDKs, using the command line tools infrastructure, reinstalling from Homebrew...).
The only solution which worked for me is the one outlined above: using
-isystem /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1
The -isystem command line argument is not described in the man pages on my system (Sequoia 16.4), although it does seem to be documented in certain places online.
Surely this is a bug in Xcode's compiler? How is it that the compiler cannot figure out, when compiling C++ code, that the C++ includes need to be queried before those from the standard C library? And how is it that Apple's man page is deficient?
Running docker build and docker exec / run very often for the same Dockerfile / compose file, I had created a mess of unused images and data. After I purged all data and started over fresh, the problem was solved.
What you observe is not by design, but a bug. I filed a ticket for it: https://issues.apache.org/jira/browse/KAFKA-19539
The fix is just proper indentation: format the code without the extra spaces before every line and the syntax error goes away.
They have a paper on the MediaPipe hand detection feature that describes the general architecture and methods used: https://arxiv.org/abs/2006.10214
The exact weights and model architecture are, I am fairly sure, still not available.
Alternatively, if you find CRON syntax hard to read, I maintain an open-source library called NaturalCron that lets you write human-readable recurrence rules like every 5 minutes between 1:00pm and 5:45pm
Very helpful. This saved a lot of my time. I had even tried AIs like ChatGPT and Grok without success; this helped a lot.
The package developer here; thanks for bringing this up.
You might be able to solve the issue by specifying "binomial" instead of binomial. If this does not help, please share the full code and data (it need not be the full data, a subset of rows is fine) so I can reproduce the issue and fix it.
Apache Tika uses language profiles from Optimaize Language Detector, which are based on statistical n-gram models. If Farsi isn't recognised, it typically means that:
The text sample is too short or ambiguous, or
The language profile for Farsi (fa) is missing
You could try:
use a longer or more diverse sample in Farsi,
make sure you are using the OptimaizeLangDetector, and
ensure you're using a recent Tika version.
What you mentioned is true. This is not IntelliSense, but IntelliCode. The settings to disable it may be under Text Editor > C# > IntelliCode.
I do think you can uninstall GitHub Copilot from VSCode to solve that, but probably your problem can be solved by the question below.
How do I disable partial line code suggestions in Visual Studio Code?
Check shouldRevalidate:
https://remix.run/docs/en/main/route/should-revalidate
We had issues with the cart as well in our MedusaJS + Remix integration, since every time we added something to the cart it made all the root calls. When we added shouldRevalidate on the index route, it would only revalidate what it needed.
Did you manage to solve it? If so, how?
If your Python script in VS Code shows no output at all (not even print statements), it's likely a VS Code configuration issue rather than a problem in your code. First, confirm you're running the correct script by adding print("Script started")
at the top. Try running the script manually in the terminal (python DBCon.py
) to see if output appears—if it does, the issue is with VS Code's execution settings. Also, ensure the correct Python interpreter is selected (Ctrl+Shift+P
→ "Python: Select Interpreter") and that mysql-connector-python
is installed. Avoid running in the interactive window by disabling related settings, and make sure to look at the TERMINAL, not the OUTPUT panel, for results.
On Linux it is possible to create a QPA plugin (platform plugin) that uses the Mesa library to do software rendering. The coding is not easy, but you only need to do it once, and it works for all your apps. I had to build Qt from source and write the QPA based on one of the existing plugins. It then renders to a memory buffer you can save as an image - very useful for testing.
I'd recommend implementing lazy loading to reduce bundle size; also check whether your files are gzipped and, if they're not, gzip them.
Also, are you sure all 150 NgModules are needed? Perhaps some could be eliminated without breaking your application.
Check this blog post.
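As a sketch of the lazy-loading suggestion, route-level loadChildren is the usual approach (module name and path are examples):
// app routing (Angular with NgModules)
import { Routes } from '@angular/router';

const routes: Routes = [
  {
    path: 'reports',
    loadChildren: () =>
      import('./reports/reports.module').then(m => m.ReportsModule), // loaded on first visit
  },
];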
To force execution order, you must connect Node 1 directly to Node 2 or create a dependency chain, like this:
Option 1: Direct sequential link. Connect InsertMsgRecebida → InsertMsgEnviada, even if they are logically unrelated. This ensures n8n waits for the first insert to complete before continuing.
Option 2: Merge with Wait. Use a Merge node in "Wait" mode to join both paths:
InsertMsgRecebida → Merge
InsertMsgEnviada → Merge
This ensures both writes must complete before the flow ends.
Option 3: Use a "Set" dummy chain. If a direct link doesn't make sense, use a dummy Set node to bridge: InsertMsgRecebida → Set → Merge → InsertMsgEnviada.
Why does "Waiting for end of stream" happen in the MySQL node? This message is often seen when the MySQL node is waiting for the database connection to finish the current insert/update. It can also appear if there's backpressure from async branches, especially when one flow is writing and another is blocked. Making sure writes are sequential, not parallel, avoids this.
Final flow suggestion:
WhatsApp Trigger → InsertMsgRecebida (Node 1) → ... (rest of your flow) → InsertMsgEnviada (Node 2)
Or use a Merge (Wait) node to enforce sync.
One thing I did was pickle the BM25Retriever.from_documents() result at indexing time, not at run time. I'm not sure if it's the right decision, but let me know!
The question was interesting and required some logical thinking. To solve this, simply add some CSS and import it. This should do the trick:
.milkdown .milkdown-image-block > .image-wrapper {
    margin: 0px 10px;
}
I think an effective way is to match all attributes regardless of quoting style and to allow inner quotes within scriptlets.
private static final Pattern ATTRIBUTES_PATTERN = Pattern.compile(
    "(\\w+)\\s*=\\s*(\"((?:[^\"\\\\]|\\\\.)*?)\"|'((?:[^'\\\\]|\\\\.)*?)')"
);
I encountered a client-side error while using Uppy for chunked uploads. The error was caused by a misconfiguration of the chunk size (chunkSize), which prevented the upload from working correctly, so Upload-Length was always 0.
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler
from llama_index.agent import FunctionAgent  # ✅ make sure this is the correct import for your LlamaIndex version

# Stub: replace with actual tool and LLM
ferramenta = ...  # your tool, e.g. a function tool or ToolSpec
llm = ...  # your LLM instance, e.g. from OpenAI or HuggingFace

# Initialize debug handler
llama_debug = LlamaDebugHandler()
callback_manager = CallbackManager([llama_debug])

# Initialize the agent
agent = FunctionAgent(
    tools=[ferramenta],
    llm=llm,
    system_prompt=(
        "Você é um assistente muito prestativo que irá me ajudar a "
        "responder minhas questões sobre engajamento."
    ),
    callback_manager=callback_manager,
    allow_parallel_tool_calls=True,
)
Your model may be suffering from an insufficient training sample. I think where you may be having a problem is the dataset structure for training; check again how you have prepared the dataset. There is an issue with the tokenize function: I don't think you should use the global tokenizer; why not pass the tokenizer explicitly?
A bit of an old thread, but I had this happen when I initialised the variable with false:
let tab_clicked = false;
tab_clicked = document.querySelector("button");
if (tab_clicked) {
// DID NOT WORK
}
Initialising with an empty string instead worked:
let tab_clicked = '';
tab_clicked = document.querySelector("button");
if (tab_clicked !== '') {
// DID WORK :D
}
Just a couple of thoughts:
Did you try setting the QPainter::TextAntialiasing render hint?
Also, sometimes when using HDPI (high res screen), the reported window width & height are not actual pixels; you can check this with devicePixelRatio() and use the value to scale the size of your pixmap.
(Sorry I can't post a comment). I'm really confused by what you're trying to achieve here. Is there a reason you have people with no follow-up time in a joint model? If they can't contribute to the longitudinal model or the follow-up time, they shouldn't be included in the analysis. Additionally, your IDs don't seem to match between the survival and longitudinal submodel, so JM won't be able to link the models together.
First, cross-verify your Firebase phone authentication implementation. It's possible that multiple attempts may have temporarily blocked requests from your device. I recommend starting with a dummy phone number added under Authentication → Phone Numbers
in the Firebase console. Once that works, try with your real mobile number. Also, make sure to wrap your code in a try-catch
block to catch any potential errors — print the error and debug accordingly.
Use dynamic SQL or IF. Using a variable like that will mess up your query plans.
Try this; maybe it will solve your problem. Be sure to close the </p> tag at the end:
<p class="gameLineupBottom">
<span style='color: yellow;'>!donate </span>
</p></div>
Hi Malcolm and Neon Man,
I’m currently facing the same issue when trying to publish offers on eBay using the Inventory API.
The error messages are related to the Return policy (ReturnsAcceptedOption, ReturnsWithinOption, ShippingCostPaidByOption) and Fulfillment policy, just like what you shared above.
I’ve tried adding a delay (even 3–5 seconds) between the inventory item, offer creation, and publish steps — but the error still persists.
Were you able to resolve this issue in the end?
If so, could you please share what worked for you?
Really appreciate any help!
Thanks 🙏