I found the reason: the order of the tags matters.
Correct manifest:
Broken manifest:
As you can see, I swapped two tags: Icon and License.
Thank you, MS, for wasting 2 hours of my life.
I copy a text formula containing the control character CHAR(10) from a source workbook and 'Paste Values' it into a cell in a destination workbook. The intermediate result is a text string that a) incorporates factors such as concatenation, b) gives no effect to the control character, and c) imposes an annoying 'Word Wrap' on the destination cell. Apparent failure. But at this point the same formula result appears in the Formula Bar with the control character taking effect. So Step 2 is to click anywhere in the Formula Bar, then Ctrl-A and Ctrl-C to select all and re-copy the result to the clipboard. Step 3 is to again 'Paste Values' the upgraded result into the destination cell (and turn off the pesky Word Wrap feature). It is a pain double-copying, double-pasting and fiddling with Word Wrap, but it works. Excel seems to respect the copied control character only if its formula appears in the Formula Bar. No, one cannot paste the source formula directly into the Formula Bar of the destination sheet; one must double-paste.
Check the following, as one or more of these causes could be at play:
Mismatched Redirect URI
The redirect_uri you send in your /authorize URL must exactly match the redirect URL registered in the Twitter/X Developer Portal — including scheme, domain/IP, port, and trailing slashes.
If you registered http://127.0.0.1/ but your request uses http://127.0.0.1 (no trailing slash), or vice versa, it will fail with 403.
Scopes vs App permissions
Even if your app permissions in the portal say “Read and Write,” if your /authorize request includes scopes not allowed by your app config, it can fail.
Your scopes look correct (tweet.write tweet.read users.read) if your app was approved for tweet posting.
Client ID or secret invalid/mismatched
Double-check that your client_id exactly matches what’s shown for your app in the developer portal.
Make sure you’re using your app’s OAuth2 Client ID, not your API Key.
Incorrect endpoint URL
The correct base domains for X/Twitter API calls are:
API requests: https://api.twitter.com (still api.twitter.com, as they have not migrated API calls to an api.x.com domain).
OAuth2 authorization:
https://twitter.com/i/oauth2/authorize
The api.x.com domain does not serve any public API endpoints, which is why your browser immediately hit a 403 Forbidden: that hostname either routes nowhere meaningful or returns an error by default.
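As a quick sanity check, it can help to construct the authorize URL yourself and compare each component against what is registered in the portal. A rough sketch in Python (the client_id, redirect_uri, state, and PKCE values below are placeholders, not your real ones):

from urllib.parse import urlencode

# Placeholder values: substitute your app's OAuth2 Client ID and the exact
# redirect URI registered in the developer portal (scheme, host, port, and
# trailing slash must match character for character).
params = {
    "response_type": "code",
    "client_id": "YOUR_OAUTH2_CLIENT_ID",
    "redirect_uri": "http://127.0.0.1/callback",
    "scope": "tweet.read tweet.write users.read",
    "state": "random-state-string",
    # X's OAuth2 flow also expects PKCE parameters
    "code_challenge": "challenge",
    "code_challenge_method": "plain",
}
print("https://twitter.com/i/oauth2/authorize?" + urlencode(params))

Pasting the printed URL into a browser makes it easy to spot a mismatched redirect_uri or an unexpected scope.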
I'm currently facing the same thing with nowscore. wp-admin opens for the main site but keeps redirecting for the subsite.
You have to update the version in metadata.json every time, and pass the same identifier and pack name to the send method created in the native module.
You just need to update the version of metadata.json.
By doing this, your pack is definitely updated with the same identifier, but it is not pushed to WhatsApp immediately; when you reload WhatsApp, you can then see the updated pack there.
Hello, what is this? 634202706172033
This is an example of what I was saying: when you install an add-in from the MS add-in store, it will ask you to pin it; see the example screenshots.
Unpinned example:
When you pin the add-in (and I think this is what you want), the add-in icon shows with the email. After publishing your add-in to the MS add-in store, your add-in will have the option to be pinned.
Challenge
0 Solves
aifiyan rlb t{ala tto sna}icrwsiie
This answer builds largely on @nick-bull's answer and https://stackoverflow.com/a/48058708/3809427, but the additional details are substantial, so I added a new answer.
First, normalization is needed for special characters. e.g. "𝓑𝓘𝓖 𝓛𝓞𝓥𝓔 ㌔" -> "BIG LOVE キロ"
text = Normalizer.normalize(text, Normalizer.Form.NFKC);
Also, removing the "VARIATION SELECTOR" characters is needed:
str = str.replaceAll("[\uFE00-\uFE0F\\x{E0100}-\\x{E01EF}\u0023\u002A\u0030-\u0039\u20e3]+", "")
Combining them all:
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;
import java.text.Normalizer;

//This is needed to output Unicode correctly via `System.out.println()`. It is not directly related to the answer, but is needed to show the result.
try {
System.setOut(new PrintStream(new FileOutputStream(FileDescriptor.out), true, "UTF-8"));
} catch (UnsupportedEncodingException e) {
throw new InternalError("VM does not support mandatory encoding UTF-8");
}
final String VARIATION_SELECTORS = "["
//Variation Selectors https://en.wikipedia.org/wiki/Variation_Selectors_(Unicode_block)
+"\uFE00-\uFE0F"
//Variation Selectors Supplement https://en.wikipedia.org/wiki/Variation_Selectors_Supplement
+"\\x{E0100}-\\x{E01EF}"
//https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)#Variants
//Basic Latin variants
+"\u0023\u002A\u0030-\u0039"
// COMBINING ENCLOSING KEYCAP
+"\u20e3"
+"]+";
String example ="\uD835\uDCD1\uD835\uDCD8\uD835\uDCD6 \uD835\uDCDB\uD835\uDCDE\uD835\uDCE5\uD835\uDCD4 ㌔ hello world _# 皆さん、こんにちは! 私はジョンと申します。🔥 !!\uFE0F!!\uFE0F!!\uFE0Fa⃣";
System.out.println(example);
//Main
var text = Normalizer.normalize(example, Normalizer.Form.NFKC)
//This originates from Nick Bull's answer
.replaceAll("[^\\p{L}\\p{M}\\p{N}\\p{P}\\p{Z}\\p{Cf}\\p{Cs}\\s]+", " ")
.replaceAll(VARIATION_SELECTORS, " ")
//reduce consecutive spaces to a single space and trim
.replaceAll(" {2,}", " ").trim();
System.out.println(text);
// Output:
// "𝓑𝓘𝓖 𝓛𝓞𝓥𝓔 ㌔ hello world _# 皆さん、こんにちは! 私はジョンと申します。🔥 !!️!!️!!️a⃣"
// "BIG LOVE キロ hello world _ 皆さん、こんにちは! 私はジョンと申します。 !! !! !! a"
What you should check before, during and after your operation:
EXPLAIN ANALYZE the SELECT alone first. Does it use efficient indexes or sequential scans?
Monitor the disk I/O and WAL generation with tools like:
pg_stat_bgwriter (checkpoint and buffer usage stats)
pg_stat_io (in PG16+, detailed I/O statistics)
OS-level tools: iostat, vmstat, or atop.
Watch transaction duration. Long transactions can block vacuum.
Use pg_stat_activity to see if your query is causing waits or blocking others.
Look for lock contention in pg_locks.
If you want to reduce impact on other users:
Instead of a single huge INSERT INTO ... SELECT, break it into chunks, like this:
INSERT INTO target_table (col1, col2, ...)
SELECT col1, col2, ...
FROM source_table
WHERE some_condition
ORDER BY some_column
LIMIT 10000
OFFSET N;
Then loop in your client or script, stepping N by the batch size (a sketch follows below).
This shortens each transaction and avoids holding locks for too long.
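A rough client-side sketch of that loop with psycopg2 (table, column, and condition names are the same placeholders as in the SQL above; the connection string is made up):

import psycopg2

BATCH_SIZE = 10000
offset = 0

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder connection string

while True:
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO target_table (col1, col2)
            SELECT col1, col2
            FROM source_table
            WHERE some_condition
            ORDER BY some_column
            LIMIT %s OFFSET %s
            """,
            (BATCH_SIZE, offset),
        )
        inserted = cur.rowcount
    conn.commit()  # one short transaction per batch
    if inserted < BATCH_SIZE:
        break
    offset += BATCH_SIZE

conn.close()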
Use pg_background or parallel job frameworks
Run batches asynchronously or schedule during low-traffic times.
Consider CREATE UNLOGGED TABLE for temp use
If you just need intermediate storage and can afford data loss on crash.
Adjust maintenance_work_mem & work_mem
Increase these parameters before the operation to improve performance of sorts or index creation (but only if you have enough RAM).
Run during maintenance windows
Especially on OLTP systems, to avoid impacting peak hours.
Monitor system resources
Before you start, check CPU, memory, and disk throughput headroom. On production, run on a staging system first if possible.
Yes, useContext() has a hidden dependency trade-off. As for why people use it: from my experience, people are mostly advised to rely on useContext for small projects. If the trade-off of using it would have a serious adverse effect, you should consider other options that account for it.
It turned out I was being misled by a shortage of memory
That is intentional.
The second return value is [3, undefined] because index 0 is 3 and index 1 is undefined since it's reading beyond the end of the right side value (which is treated as an iterable.)
If it had returned a non-iterable value instead of an array, it would have errored.
I suggest you avoid destructuring in the for..of. That way you can check the return value and if it's an array, check its length to make sure it's 2. Then do the destructuring step afterward.
This is a classic cross-origin frame access issue. It happens because your injected iframe's src points to a different origin (e.g. https://www.nytimes.com), and the browser's same-origin policy forbids scripts on the parent page from accessing the iframe's DOM.
If you inject an <iframe> into a page like https://www.nytimes.com and set its src to any URL from a different origin, your extension's content script can't directly access or modify the iframe's document.
Browsers enforce the same-origin policy to prevent malicious scripts from reading or manipulating cross-origin iframe content.
Option 1: Use an iframe with a source from your extension (same origin)
Instead of pointing the iframe's src to an external website (like https://www.nytimes.com), create an HTML file inside your extension (e.g. panel.html) and set the iframe's src to that file using the extension URL:
iframe.src = chrome.runtime.getURL("panel.html");
Option 2: Communicate via postMessage between iframe and content script
If you absolutely need to load an external page inside the iframe (or cross-origin page), you cannot directly access its DOM.
Instead, use window.postMessage to send messages between the parent and iframe if the iframe supports it (requires cooperation from iframe page).
This generally doesn't work with external sites like nytimes.com unless you control the iframe content.
Option 3: Build your UI fully inside your extension
Instead of embedding an external site, create your entire UI (your Wordle panel) inside the extension iframe.
Populate it dynamically from your extension scripts, then inject the iframe into the page.
// Instead of this:
// iframe.src = "https://www.nytimes.com"; // Cross-origin iframe, blocked access

// Do this:
iframe.src = chrome.runtime.getURL("panel.html"); // Your extension page

iframe.addEventListener("load", () => {
  const panelDoc = iframe.contentDocument || iframe.contentWindow.document;
  // Now safe to modify the panel's DOM
});
Make sure "panel.html" is declared in your manifest.json under web_accessible_resources:
"web_accessible_resources": [
  {
    "resources": ["panel.html"],
    "matches": ["<all_urls>"]
  }
]
Use contentWindow only after the iframe is loaded (the load event).
If you want to pass data (like user stats) into the iframe, you can either:
Inject the data via query parameters in the URL and let the iframe script parse them.
Use iframe.contentWindow.postMessage() to send data after the iframe loads.
Or have the iframe pull data from chrome.storage or a background script.
The configuration for Kafka autoconfiguration is now present in spring-autoconfigure-metadata.properties since Spring Boot 3.x.
Thanks for sharing those different options; it's very helpful to see multiple angles explored.
I'm also looking into this for Microsoft Fabric Warehouse, and from what I can tell, a lot of the traditional approaches (like using sys.objects, sys.partitions, or similar DMVs) don't behave quite the same way in Fabric compared to classic Azure SQL Data Warehouse or SQL Server.
The dynamic SQL route using sp_executesql is promising, but as SRP mentioned, performance becomes a bottleneck with thousands of tables. Ideally, Fabric should expose a system view that tracks row counts, but so far, it looks like that's not available.
If anyone has insights into whether there's an equivalent metadata view in Fabric (like a DMV or any internal table that tracks row counts), that could be a cleaner way forward.
It would be great to hear from anyone who has tested this specifically on Fabric.
For quick text comparisons, I’ve found https://diffsnap.com quite handy. No need to install anything — just paste and compare.
I am also facing a similar issue; even after using the schema.table format it's still not working. I'm getting:
psycopg2.errors.UndefinedTable: relation "nyc_taxi.trips" does not exist.
Any other solution or workaround?
You will also have to make sure Android Studio has access to the local network by following the steps below:
Go to Mac Settings > Privacy & Security > Local Network, then enable it for Android Studio.
Sub DeleteAllPicturesInWorkbook()
    Dim ws As Worksheet
    Dim shp As Shape
    For Each ws In ThisWorkbook.Worksheets
        For Each shp In ws.Shapes
            shp.Delete
        Next shp
    Next ws
End Sub
While defining the
interface IMGNAMES {
Image1: IMGPROP;
}
change it to
interface IMGNAMES {
[key: string]: IMGPROP;
}
This is because you're using a different property name for each image (i.e. Image1, Image2, etc.), so an index signature is needed to index by an arbitrary key.
The only calculation is the one you made in your post and you are right that the first thing that came out was that the second one is
I am also getting that same symptom: it keeps thinking and does nothing.
Did you end up solving it? Just like you I am trying to use flutter 3.29 and I was getting this exact error.
I was upgrading a very old project from flutter 3.0.2 to a newer version and decided to upgrade to the latest version possible (3.29.0 at this time)
What worked for me was removing this from the build.gradle:
flutter {
source '../..'
}
and then going directly to the dependency (Geolocator 5.0.1+1 in my case) folder android/build.gradle and directly giving it version numbers:
android {
    if (project.android.hasProperty("namespace")) {
        namespace("com.baseflow.geolocator")
    }

    compileSdkVersion 34
    // compileSdk flutter.compileSdkVersion

    defaultConfig {
        // minSdkVersion flutter.minSdkVersion
        minSdkVersion 26
    }

    lintOptions {
        disable 'InvalidPackage'
    }

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}
What the error says is that it isn't recognizing "flutter"; that's because of the flutter.xVersion type of code (e.g. flutter.compileSdkVersion). It will probably be fixed in the dependency itself in the future, but that's what worked for me.
Did you ever have any success getting this working? I've tried a million different ways and the search bar still sits firmly at the top of my List{}
You should give the agents permissions in Five9 Admin to allow them to update contact records. Then, instead of using a script, add contact variables in Five9 Admin under Contacts and Fields, and add those variables to a campaign using the campaign profiles under the Layout tab, making sure that the read-only box is unchecked. Then, when an agent gets a call, they will be able to write to those variable fields, which they can find in the Interaction tab on their Five9 agent desktop, and when the agent dispositions the call, the contact record will be updated with their input from the variable fields.
Did you solve the problem?
And a question: in step two, how do you make the request to https://graph.facebook.com/v20.0/upload:<UPLOAD_SESSION_ID>?
Do you use JavaScript fetch?
I've been trying to use fetch but I receive CORS errors.
My mistake was in pubspec.yaml: I put the plugin configuration in:
dev_dependencies:
instead of:
dependencies:
I accidentally deleted my default VPC. After I closed everything and waited for 10 minutes, all the default ones were back up. I guess AWS patched it and gives you a default again.
So, don't quote me on this (also correct me if I'm wrong), but what I believe may be happening, is you create your task as a child task from the Main actor, so the work is being performed on the main actor, and then you lock the main actor with a semaphore, causing a deadlock. You could try using Task.detached, but it's not a good solution, because, as it was rightfully pointed out to you in the comments, don't mix semaphores with tasks. Not only it defeats the whole purpose, but it also may cause unintended consequences. Well, it already has, hasn't it? :)
The better approach would be to go all in on Swift Concurrency, and just do the work you need to do right within the Task. If you need to do it on another thread / actor, you can do it by creating a nested Task within your Task. But if it's already running on the Main Actor (again, better check, I'm not sure), can't you simply use code like this?
func didClickStart() {
    scene.presentLoadingScreen()
    var models: [String: Entity]!
    Task {
        models = await loadEntitiesInParallel(fns: entities, tr: tr)
        scene.presentGame(models: models)
        // Alternatively, if needed:
        // MainActor.run { scene.presentGame(models: models) }
    }
}
Turns out, PyTorch Lightning had nothing to do with this at all. Even just a normal vanilla PyTorch loop was causing issues terminating the program. An os._exit(0) works, but the more permanent solution I found was to update my PyTorch installation to the nightly build.
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
At the time of writing, this installs a dev build of the next PyTorch version, 2.8, which seems to have solved the issue. I do not know the root of the issue, but it's possible that the new M4 Max chip + macOS 15.5 caused some bugs to surface with how PyTorch terminates multiprocessing.
Open the android module and then run it. For me it works!
https://github.com/electron/electron/issues/46882
Found this open issue :) Downgrade to 35.2.1 to avoid this issue.
Let's first try to understand the module map file, the most important piece of the Swift ecosystem when interoperating with C/C++ if you don't use a bridging header.
A module map (module.modulemap) is a small text file understood by Clang. It tells the compiler how a set of C, Objective-C, or C++ header files should be grouped into a Clang module and which of those headers make up that module's public interface.
Thanks to the module map, Clang (and therefore Swift, which embeds Clang under the hood) knows which headers to compile into the module and which symbols to expose.
Put differently, the module map is to Clang modules what a Package.swift manifest is to Swift packages: a manifest that explains what belongs to the module and how to expose it.
Let's look at a typical framework bundle:
libavformat.xcframework/
├── Info.plist
└── macos-arm64_x86_64
└── libavformat.framework
├── Headers -> Versions/Current/Headers
├── libavformat -> Versions/Current/libavformat
├── Modules
│ └── module.modulemap // That is our module map, it guides swift to find your symbols.
├── Resources -> Versions/Current/Resources
└── Versions
├── A
│ ├── Headers
│ │ ├── avformat.h
│ │ ├── avio.h
│ │ ├── config.h
│ │ ├── os_support.h
│ │ ├── version_major.h
│ │ └── version.h
│ ├── libavformat
│ └── Resources
│ └── Info.plist
└── Current -> A
11 directories, 11 files
And this is what our module map looks like:
framework module libavformat [system] {
umbrella "." // All Headers are exported as public symbols in swift, except for os_support.h
exclude header "os_support.h"
export *
}
Check the link I provided earlier; although Swift 5.9 interoperates with C++ directly, the underlying modulemap mechanism hasn't changed. Quoting the original post: "In order for Swift to import a Clang module, it needs to find a module.modulemap file that describes how a collection of C++ headers maps to a Clang module."
So, whether automatically or manually, we have to make sure that the modulemap file exists.
Think about the scenarios in which we usually compile our C++ library:
How the C++ code is Compiled | Who creates the module map? | When you have to author one manually |
---|---|---|
Xcode framework / target (You let xcode build your C++ dependency) | Xcode auto-generates it | Rarely – only if you need custom requires, link, or want to hide headers |
Swift Package Manager(You let SPM build your C++ dependency) | SPM auto-generates it when it finds an umbrella header in include/ | If you don’t provide an umbrella header or you need finer control (multiple sub-modules, add link, exclude heavy templates, hide some unused symbols, etc.) |
Plain .c/.cpp + headers in some folder (no framework, no SPM target, You use cmake or other build system) | Nobody | You must supply a module.modulemap, then add the directory to SWIFT_INCLUDE_PATHS / pass -I so Swift can find it |
So to clarify your questions:
When does building a framework benefit from having a modulemap file in its build settings?
When does building a Swift project that imports an Objective-C++ framework benefit from that framework having a modulemap file?
A module map gives these advantages over a bridging header when you use Swift in your Xcode project:
Advantage | Bridging header | Module map |
---|---|---|
1. Pre-compiled representation (PCM) so headers aren't re-parsed for every Swift file | NO, every Swift file reparses the header text (although it can be cached as a pre-compiled header (PCH), reopening and deserializing the PCH happens for every Swift file) | YES Parsed once → cached PCM → big compile-time savings, especially for large frameworks. |
2. Stable logical name you can import MyLib from Swift & Obj-C/C++ | Partial – Swift can see symbols via the bridging header but the module name is your target name (import MyApp) rather than the library’s own. | YES Explicit, reusable namespace (import MyLibCore, import MyLibExtras, etc.). |
3. Selective exposure / hiding of headers | NO All included headers become public; no sub-modules. | YES export *, exclude header.h, sub-modules (module Core {…}), etc. |
4. Automatic linker flags (link "z", link "CoolC++Lib") | NO, You must add libraries to “Link Binary With Libraries” or other-linker-flags yourself. | YES Link directives live in the map, so SwiftPM / Xcode pick them up automatically. |
5. Better incremental builds & parallelism | NO, Any change to the bridging header forces all Swift files to rebuild. | YES PCM change fingerprints allow fine-grained invalidation; Swift files compile in parallel against the cached module. |
But if your project's main language is Objective-C, your project will work perfectly without a module map.
It is not mandatory to have a module map, you can still stick to your old objective-c++ wrappers solution.
Yes, you can tune/add the modulemap file anytime, but keep in mind that you should clean the build folder (Shift + Cmd + K) after changing it, so stale caches aren't reused.
, so stale caches aren’t reused.No, Embedding (copying the .framework into the app bundle at build time) is purely a link-and-package concern. The module map is consumed before that, while compiling your Swift sources. Whether you later run Embed & Sign or link it from a system path has no impact on the need for or contents of module.modulemap.
No, whether it is a static library or a shared dynamic library only decides how the object code is linked and loaded at runtime, while the module map matters at compile time.
As of December 2024, Smart Home Actions have migrated to the Google Home Developer Console.
Here are the steps to create a new project in Google Home Developer Console for a Cloud-to-cloud integration:
Go to console.home.google.com/projects in your web browser.
On the "Welcome to the Developer Console" page, click the "Create a project" button.
If you're creating a brand new project, you'll be prompted to enter a project name. After creating the project, you will be taken to the "Project home" dashboard for your newly created project.
Under the "Cloud-to-cloud" section, click on "+ Add cloud-to-cloud integration".
This will initiate the process of setting up your Cloud-to-cloud integration within your new project in the Google Home Developer Console.
On Linux running KDE Plasma 6.4.1, the solution on my machine was to set "Adaptive Sync" to "Always".
I still get some jank (micro-stutters) every now and then in the Chrome browser, especially on heavy websites, but it's mostly smooth and responsive.
I've been having a similar problem. This might help, though it's not definitive. Sadly I haven't found a permanent solution, as the problem seems to be with the packages/compatibilities.
So, I have a program I wrote in PyCharm that uses pandas-ta and has worked perfectly for the past year or so... until I tried to improve the code, using a new project with a new interpreter.
I downloaded all the same libraries, but keep getting an error similar to yours:
UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
from pkg_resources import get_distribution, DistributionNotFound
I can't find a solution, BUT, when checking Google/ChatGPT, I kept getting suggestions to downgrade libraries.
On checking, I noticed that although the interpreters use the same packages and Python 3.12, the versions of the packages were different.
In the code that works with pandas_ta, numpy is 1.26.4 and pandas is 2.2.3.
In the new code that's giving me the error, numpy is 2.3.1 and pandas is 2.3.0.
It's not ideal (and from the date of the post, a bit late), but maybe if you downgrade pandas and/or numpy to these versions, then install pandas_ta, it might work?
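If you want to try that, the pinning itself would be something like this (these are simply the versions from my working environment above, then installing pandas-ta on top):

pip install numpy==1.26.4 pandas==2.2.3
pip install pandas-ta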
Create a visual ammo bar that is able to do the following:
reduces when the player shoots (e.g., presses space)
gradually regenerates bullets over time
the update must be displayed gradually. Ensure the shooting option has a dedicated key in the program. Also install ursina.
# shot values
max_shots = 12
current_shots = max_shots
regen_rate = 2        # shots regenerated per second
regen_pause = 2.5     # seconds after shooting before regeneration starts
last_shot_time = 0
regen_timer = 0
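For what it's worth, here is a rough, framework-agnostic sketch of how those values could drive the shoot/regenerate logic (the function names are my own; in an Ursina game you would call them from its input() and update() hooks and scale a bar entity by current_shots / max_shots):

import time

def shoot(now):
    """Spend one shot if any are left; returns True if a shot was fired."""
    global current_shots, last_shot_time
    if current_shots >= 1:
        current_shots -= 1
        last_shot_time = now
        return True
    return False

def regenerate(now, dt):
    """Gradually refill shots once regen_pause has elapsed since the last shot."""
    global current_shots
    if now - last_shot_time >= regen_pause:
        current_shots = min(max_shots, current_shots + regen_rate * dt)

# Example: fire once, then simulate a frame 3 seconds later
shoot(time.monotonic())
regenerate(time.monotonic() + 3.0, dt=0.016)
print(f"shots left: {current_shots:.2f}/{max_shots}")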
You will need to manually download and build or integrate the MsQuic SDK into your project.
I know this question is OLD, but I stumbled on this problem today and I want to share how I resolved it.
My organization has a standard development environment setup built on top of docker that uses <app-name>.localhost for the local apps, so the apps always are in "the root" of the web address.
To resolve this, I patched this setup to use a custom callback URL (and I registered it in the Google Console for the app):
http://localhost/login-with-google.php/<app-name>/<app-url-callback-path>
In this PHP file, I assembled the original intended URL based on the <app-name> and appended the query string provided by Google and redirected to the original callback URL.
Worked like a charm!
You have to install OPOS CCO...
The first answer here did not help me, possibly because they were talking about this button over an image in their "first option," whereas in my case it's just there because of a copy/paste action and no image at all. The second option did apply to me, but there was no Convert option when I clicked File.
So, I just came up with a hack: just drag this box off to the side, past the edge of the document, and then it's just not visible and no longer annoying.
Go to the terminal and ask Copilot.
Rails now comes with:
bin/rails stats
If you're working with Next.js, try renaming your .env to .env.local. Any new errors?
I ran into same issue. Our requirement was to complete the transaction without UI flow. Did you find any solution ?
I used a VPN and the problem was solved. If you have a VPN installed, try using it.
# Convert image to grayscale again for contour detection
gray = cv2.cvtColor(image_np, cv2.COLOR_RGB2GRAY)
# Apply threshold to get binary image
_, thresh = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)
# Find contours
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Create a black mask and draw the contours
mask = np.zeros_like(image_np)
cv2.drawContours(mask, contours, -1, (255, 255, 255), thickness=cv2.FILLED)
# Apply mask to extract the figure
result = cv2.bitwise_and(image_np, mask)
# Convert white background to transparent
rgba_image = cv2.cvtColor(result, cv2.COLOR_RGB2RGBA)
rgba_image[np.all(mask == 0, axis=-1)] = [0, 0, 0, 0]
# Save the final image with transparency
output_image = Image.fromarray(rgba_image)
output_path = "/mnt/data/mascara_isolado_transparente.png"
output_image.save(output_path)
output_path
Mine started working after closing and reopening GitExtensions
This free app URL Shortcuts for Google Drive seems to fill an obvious gap:
you can create shortcuts in this add-on, or
use .url, .desktop and .webloc shortcuts created on desktop computers and uploaded/synced with Google Drive and
click to open these links in a new browser tab.
Apparently not.
FEAT_LSE includes the LD/ST<OP> instructions, which are guaranteed to succeed and make progress, whereas exclusive load/store pairs need to loop (ephemerally) to achieve the same effect in case a concurrent exclusive store clears the monitor.
After further investigation I found that spacy-layout release v0.0.11 introduced the ability to pass a DoclingDocument to spaCyLayout.__call__. For reference, the simple way to process a document with Docling and then pass it to spacy-layout would be something like the following:
import spacy
from spacy_layout import spaCyLayout
from docling.document_converter import DocumentConverter
# Setup spaCy pipeline
nlp = spacy.load("en_core_web_sm")
layout = spaCyLayout(nlp)
# Convert a document with Docling
source = "./starcraft.pdf"
converter = DocumentConverter()
docling_result = converter.convert(source)
# Verify Docling conversion to markdown
print(docling_result.document.export_to_markdown())
# Pass Docling document to spacy-layout
doc = layout(docling_result.document)
# Examine spacy-layout spans
for span in doc.spans["layout"]:
# Document section and token and character offsets into the text
print(f"{span.label_}: {span.text}")
You need to use a library to make HTTP requests; popular choices include axios, node-fetch, or the built-in https module in Node.js. You also need to handle the asynchronous nature of these requests in your Mocha test.
If you need a code example of how to do this, let me know and I will provide a corrected script.
You must enter the number in international format:
val telegramChatIntent =
Intent(Intent.ACTION_VIEW, Uri.parse("https://t.me/+79999999999"))
telegramChatIntent.setPackage("org.telegram.messenger")
startActivity(telegramChatIntent)
According to this page https://registry.khronos.org/OpenCL/specs/3.0-unified/html/OpenCL_C.html#aliasing-rules under 6.4.3 Explicit Conversions, you should be able to use convert_int8() and sister functions to convert vector types to other vector types with the same number of elements.
It is strange that overlooking small details can break the configuration. My environment file had the URL as localhost:4200. I updated it to http://localhost:4200 and it worked.
On Win11, I ran the emulator as described by Mohsen and found out that my system was missing MSVCP140.dll. Installing "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019" aka "vc_redist.x64.exe" solved my problem.
This GTK 3 setup error ("Procedure entry point deflateSetHeader could not be located in DLL libgio-2.0.0.dll") suggests that the problem is in zlib1.dll. You likely have several versions of it, with the wrong one coming first in the Path environment variable.
Microsoft Edge is different and/or it is an older version. Some apps/programs use the same ToolTip style, e.g.:
O-browser:
Logo: (screenshot)
ToolTip: (screenshot)
X Browser:
Logo: (screenshot)
ToolTip: (screenshot)
Latest Edge ToolTip: (screenshot)
You just call the RNG’s own random method.
import numpy as np
rng = np.random.default_rng(1949)
selection = rng.random((N_fixed_points, 2))
To change the default shell in Kali Linux from Zsh (the current default) back to Bash, you can use the chsh command. Open a terminal and run chsh -s /bin/bash. You will be prompted for your password. After entering it, log out and log back in for the changes to take effect.
class MyWidget extends StatefulWidget {
  @override
  _MyWidgetState createState() => _MyWidgetState();
}

class _MyWidgetState extends State<MyWidget> {
  final FocusNode myFocusNode = FocusNode();
  final TextEditingController controller = TextEditingController();

  @override
  void dispose() {
    myFocusNode.dispose();
    controller.dispose();
    super.dispose();
  }

  void _unfocusTextField() {
    myFocusNode.unfocus();
  }

  @override
  Widget build(BuildContext context) {
    return Column(
      children: [
        TextField(
          controller: controller,
          focusNode: myFocusNode,
        ),
        ElevatedButton(
          onPressed: _unfocusTextField,
          child: Text("Unfocus"),
        ),
      ],
    );
  }
}
Also wanted to mention that if you'd like to become a software tester, I recommend this bootcamp: astorialab.com
The issue still persists with EF Core as of version 7.x.x. I decided to generate a separate context class for each schema under a different namespace. This way, I have all tables mapped to their entity classes. Since I had to generate the context classes with separate commands, the navigation properties and relations are not set automatically, but that's okay. I can still join tables by specifying the column to join on in the query.
If you arrive at this stackoverflow question and your server appears to be returning the correct headers, you may have a silly issue: Chrome Dev Tools 'Disable cache' is likely interfering with your test.
You are likely unintentionally bypassing the very caching that you are trying to test by opening the Chrome network tab.
Given that pretty much the only way you test OPTIONS request behavior is in the Dev Tools console of your browser (how else would you know that you were making the options requests?) there's an important "gotcha" here:
If you have the 'Disable cache' checkbox checked at the very top of the network tab then the OPTIONS cache will be completely ignored.
This makes sense, but is unintuitive since as a dev you normally always have 'Disable cache' checked when in the network tab since you don't want stuff you're debugging cached pretty much ever. But indeed, that checkbox will also bypass the OPTIONS cache, not just assets caches you usually think about, so even if your server is set up correctly the browser will just request options on every single request until you uncheck that box.
Hope this helps someone!
Tangential rant, this is poor design by the chrome devtools team, OPTIONS should get special treatment and its own checkbox or something, as well as the ability to hide them specifically from cluttering up your request list when you're trying to debug actual requests. Very frustrating. Having no way to filter "requests I make" from "requests the browser makes as protocol overhead" in a debug tool is silly. Yes, you can INVERT a method:OPTIONS filter to filter them out, but then you can't use the filter for anything else, which creates a worse clutter problem to deal with when zeroing in on a problem... :)
When using MiniBatchKMeans with BERTopic, it's common for some data to not be assigned a topic due to:
High Dimensionality of Embeddings: Embeddings may be too sparse or not well-clustered.
Noise in Data: Some data points might not clearly belong to any cluster.
How to solve this Issue:
Tune n_clusters in MiniBatchKMeans:
• Start by testing different values for n_clusters. If it’s too low, some topics may merge, and if it’s too high, many data points may be left unclustered.
from sklearn.cluster import MiniBatchKMeans
cluster_model = MiniBatchKMeans(n_clusters=50, random_state=42)
topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2", hdbscan_model=cluster_model)
Use a Different Clustering Algorithm:
BERTopic allows for other clustering models. For instance, using HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) is often more flexible.
Example
from hdbscan import HDBSCAN
cluster_model = HDBSCAN(min_cluster_size=10, metric='euclidean', cluster_selection_method='eom')
topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2", hdbscan_model=cluster_model)
Reduce Dimensionality Before Clustering:
Use dimensionality reduction (e.g., UMAP) to make the data more clusterable:
from umap import UMAP
umap_model = UMAP(n_neighbors=15, n_components=5, metric='cosine')
topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2", umap_model=umap_model)
Analyze Unassigned Data:
Check what makes the unassigned data different. These may be outliers or too generic to form a unique topic.
Example:
unassigned_data = [doc for doc, topic in zip(documents, topics) if topic == -1]
Increase Training Data Size:
If your dataset is too small, clustering might struggle to find consistent patterns.
Adjust BERTopic Parameters: min_topic_size: Set a smaller value to allow smaller topics to form.
• n_gram_range: Experiment with different n-gram ranges in BERTopic.
topic_model = BERTopic(n_gram_range=(1, 3), min_topic_size=5)
Refine Preprocessing:
Ensure text data is clean, normalized, and free of irrelevant tokens or stopwords.
Debugging:
•After making changes, check how many data points are still unclustered:
unclustered_count = sum(1 for t in topics if t == -1)
print(f"Unclustered points: {unclustered_count}")
You don't need to worry about it. Just add a transparent border, or a border color the same as the background, and it will fix your problem.
It seems the Google Maps SDK is designed to be used from the client (on the device), and security comes from the restrictions you apply in the Google Cloud Console:
You can say: "Only allow this key to be used if the call comes from an app with package name X and SHA-1 Y."
This way, even if someone sees your key, they won't be able to use it in their own app.
I had the same issue as you and referred to this article: Setting up Swagger (ASP.NET Core) using the Authorization headers (Bearer)
SwaggerGen adds an Authorize button to the Swagger docs. Once you set the token, you can read it in code by putting this line in a controller action:
var authToken = this.HttpContext.Request.Headers["Authorization"].ToString();
What matters most is not 3NF itself, but the reasons behind normalization. Its main purpose is to prevent update anomalies, which normalization accomplishes by storing data in a single location. Conversely, with intentional denormalization, this is managed by updating code across multiple places within a single transaction. Both approaches are acceptable.
Normalization is critical for relational databases and SQL, which were invented to allow non-programmer end-users to access data easily. Therefore, the database must ensure consistency even when a user performs a single update. However, when databases are used by programmed code that has been reviewed and tested, you can duplicate data for performance. This is where MongoDB's document model shifts more responsibility to the developer, leading to improved performance.
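To illustrate the "updating multiple places within a single transaction" part, here is a hypothetical pymongo sketch (collection and field names are made up; multi-document transactions require a replica set or sharded cluster):

import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["shop"]

def rename_customer(customer_id, new_name):
    """Update the canonical customer document and its denormalized copies together."""
    with client.start_session() as session:
        with session.start_transaction():
            db.customers.update_one(
                {"_id": customer_id},
                {"$set": {"name": new_name}},
                session=session,
            )
            # The customer name is duplicated inside each order for fast reads,
            # so the application code updates the copies in the same transaction.
            db.orders.update_many(
                {"customer_id": customer_id},
                {"$set": {"customer_name": new_name}},
                session=session,
            )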
It's working after I removed the older version and upgraded to the new version:
sudo apt-get remove docker-compose
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
docker-compose --version
In case someone else experiences this issue:
In the lower left select Dart Analysis (where Terminal etc is located)
Goto Analyzer Settings
Select scope analysis...
var button = new Button { Content = "Google" };
form.Controls.Add(button);
button.Clicked += (s, e) => {
    webView.Source = new Uri("https://www.google.com.hk/");
};
I ran into the same error, Template format error: Invalid outputs property : [Type, Properties], because I added a couple of new resources but placed them below the Outputs block (I just threw the new resources in at the end of the template); they need to be in the Resources block.
Anyone have an idea how I could fix this error? Error: ENOENT: no such file or directory, lstat '/vercel/path0/.next/server/app/(app)/page_client-reference-manifest.js'
I was dealing with this problem earlier today with SQL Server 2017. The ODBC driver didn't seem to matter as I would have spotty connection issues (some would timeout others would work just fine). Setting it to use the IP Address instead of the hostname in the ODBC connection string worked.
The issue is you are importing "match" while also having a variable named "match". Name it something else and you should be fine.
Could you double-check the existing configuration in the .env file to ensure it reflects the latest updates? Auth0 has changed some property names in the most recent version.
In my case:
AUTH0_BASE_URL should be APP_BASE_URL
AUTH0_ISSUER_BASE_URL should be AUTH0_DOMAIN
You can refer to the latest documentation here for more details.
The complete list of updated environment variable names is as follows:
AUTH0_SECRET='use [openssl rand -hex 32] to generate a 32 bytes value'
APP_BASE_URL='http://localhost:3000'
AUTH0_DOMAIN='https://xxx.auth0.com'
AUTH0_CLIENT_ID='{yourClientId}'
AUTH0_CLIENT_SECRET='{yourClientSecret}'
I don't think there is currently any way to do this without a copy unless you use a sketchy technique like the one you talked about. It's probably best to ask this on Github as a new feature/performance idea.
I am facing the same problem. I asked GPT and every other AI chat, and I can't find a satisfying answer.
You can raise PydanticCustomError from pydantic_core instead of ValueError.
Your Pydantic Model will be something like this:
from datetime import date
from typing import Optional

from pydantic import (
    BaseModel,
    field_validator,
    HttpUrl,
    EmailStr,
)
from pydantic import ValidationError
from pydantic_core import PydanticCustomError


class Company(BaseModel):
    company_id: Optional[int] = None
    company_name: Optional[str]
    address: Optional[str]
    state: Optional[str]
    country: Optional[str]
    postal_code: Optional[str]
    phone_number: Optional[str]
    email: Optional[EmailStr] = None
    website_url: Optional[HttpUrl] = None
    cin: Optional[str]
    gst_in: Optional[str] = None
    incorporation_date: Optional[date]
    reporting_currency: Optional[str]
    fy_start_date: Optional[date]
    logo: Optional[str] = None

    @field_validator('company_name')
    def validate_company_name(cls, v):
        if v is None or not v.strip():
            raise PydanticCustomError(
                'value_error',  # This will be the "type" field
                'Company name must be provided.',  # This will be the "msg" field
            )
        return v
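A quick way to check what the API will see (illustrative only; the other required fields are filled with dummy values so that just this validator fires):

try:
    Company(
        company_name="   ",
        address="x", state="x", country="x", postal_code="x",
        phone_number="x", cin="x",
        incorporation_date=date(2020, 1, 1),
        reporting_currency="USD",
        fy_start_date=date(2020, 4, 1),
    )
except ValidationError as exc:
    error = exc.errors()[0]
    print(error["type"])  # value_error
    print(error["msg"])   # Company name must be provided.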
If you want a more sophisticated solution, you can read the related discussion on the Pydantic repository. Basically, you can create a wrapper class to use with the Annotated type from the typing module.
I am going to give my own example, because I also have the ValidationInfo parameter in the field validation method.
import inspect
from typing import Any, Callable

from pydantic import (
    ValidationInfo,
    ValidatorFunctionWrapHandler,
    WrapValidator,
)
from pydantic_core import PydanticCustomError
class APIFriendlyErrorMessages:
    """
    A WrapValidator that catches ValueError and AssertionError exceptions and
    raises a PydanticCustomError with the message from the original exception,
    while removing the error type prefix, which is not user-friendly.
    """

    def __new__(cls, validator: Callable[[Any], None]) -> WrapValidator:
        """
        Wrap a validator function with a WrapValidator that catches ValueError and
        AssertionError exceptions and raises a PydanticCustomError with the message
        from the original exception, while removing the error type prefix, which is
        not user-friendly.

        :param validator: The validator function to wrap.
        :returns: A WrapValidator instance that prettifies error messages.
        """
        # I added this; in the discussion it was used just with the "v" value
        signature = inspect.signature(validator)
        # Verify whether the validator function has a ValidationInfo parameter
        has_validation_info = any(
            param.annotation == ValidationInfo
            for _, param in signature.parameters.items()
        )

        def _validator(
            v: Any, handler: ValidatorFunctionWrapHandler, info: ValidationInfo
        ):
            try:
                # If it does not take ValidationInfo, call it just with v
                if not has_validation_info:
                    validator(v)
                else:
                    # Otherwise call it with v and info
                    validator(v, info)
            except ValueError as exc:
                # This is the same PydanticCustomError we used before
                raise PydanticCustomError(
                    'value_error',
                    str(exc),
                )
            return handler(v)

        return WrapValidator(_validator)
And in my model:
from datetime import datetime
from decimal import Decimal
from typing import Annotated, Optional
from pydantic import BaseModel, Field, ValidationInfo, field_validator
from app.api.transactions.enums import PaymentMethod, TransactionType
from app.utils.schemas import APIFriendlyErrorMessages # Importing my Custom Wrapper
# Validate Function
def validate_total_installments(value: int, info: ValidationInfo) -> int:
if value > 1 and info.data['method'] != PaymentMethod.CREDIT_CARD:
# Raising ValueError
raise ValueError('Pagamentos a vista não podem ser parcelados.')
return value
# Annoted Type using the Wrapper and the validate Function
TotalInstallments = Annotated[int, APIFriendlyErrorMessages(validate_total_installments)]
class TransactionIn(BaseModel):
total: Decimal = Field(ge=Decimal('0.01'))
description: Optional[str] = None
type: TransactionType
method: PaymentMethod
total_installments: TotalInstallments = Field(ge=1, default=1) # Using your annoted type here
executed_at: datetime
bank_account_id: int
category_id: Optional[int] = None
I hope that helps you.
I cleared derived data -> reset package cache -> activity monitor -> Xcode -> force quit
I had also faced the same issue. You need to register the Graybox OPC Automation DLL, after which you will be able to communicate with any OPC DA server.
Download DLL from here.
Open command line as Administrator and then change path to Folder that contains DLL and then write
regsvr32 "name of dll"
For OPC DA, try to use lower Python versions (below 3.10); you can also explore OpenOPC-DA.
On my Raspberry Pi, wpa_supplicant.conf is located inside a wpa_supplicant subdirectory:
/etc/wpa_supplicant/wpa_supplicant.conf
Just a note.
It seems your browser is caching your request; browsers sometimes cache requests to the same URL. Or it could be that the OPTIONS request is being ignored by your microcontroller.
I found a rather simple formula to recognize empty ranges. It goes like this:
=IF(ARRAYFORMULA(AND(H5:H36="")),"empty","not empty")
Where H5:H36 is a sample range (a column in this case), and "empty", "not empty" can be replaced with other statements.
OK, the question revealed the answer (clarifying that NAME is not dimensional). The solution that seems clearest is something like the following. Note I'm also joining another table D that only joins on A.ID, to demonstrate that it must come after the joins on B and C.
Please scrutinize.
with NAME as (
select distinct A_ID, NAME from B
union
select distinct A_ID, NAME from C
)
select distinct a.ID as 'A_ID', b.NAME as 'B_NAME', c.NAME as 'C_NAME', B.etc., C.etc., D.etc.
from A a
inner join NAME n on n.A_ID = a.ID
full join B on a.ID = b.A_ID and n.NAME = b.NAME
full join C on a.ID = c.A_ID and n.NAME = c.NAME and (b.NAME = c.NAME or b.NAME is null)
where (b.NAME is not null or c.NAME is not null)
#[cfg(test)]
use test_env_helpers::*;
#[after_all]
#[cfg(test)]
fn after_all() {
cleanup_tests();
}
I found this crate, https://docs.rs/test-env-helpers/latest/test_env_helpers/, to be very helpful for cleaning up test code after running Docker testcontainers using OnceCell.
I am getting the same error intermittently in production. It does not reproduce on my local
I ran into this error when running pip3 install <mymodule>. I checked that I had no version conflict.
What fixed it was upgrading pip (to version 23.0.1):
pip3 install --upgrade pip
You should read the image from the file path in Node.js and insert it as a blob. Then it works.
fs.readFile(path + image.filename, function (err, data) {
  if (err) throw err
  var sql = 'Insert into table (id, image) VALUES ?';
  var values = [["1", data]];
  connection.query(sql, [values], function (err, data) {
    if (err) {
      // some error occurred
      console.log("database error-----------" + err);
    } else {
      // successfully inserted into db
      console.log("database insert successful-----------");
    }
  });
})
This is only available when using Shiny. In a Quarto document with OJS and R, only the OJS is dynamic; anything in R is static. I think of it as a set-up-and-interact partnership: R sets up the data that can be visualised and interacted with using OJS elements.
Coming from R, I found Arquero to be a big help. It's similar enough to dplyr that you can run small calculations on your dynamic inputs in order to create dynamic outputs.
It is clear that Windows limits the VRAM limit for a single program:
Matlab is able to utilize only a part of actual available VRAM
But the specific proportions don't quite match, perhaps Microsoft has made adjustments.
If your tensor is not of boolean or integer type, make it this way:
t_opp = 1 - t
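For example (assuming a PyTorch tensor; the same idea applies elsewhere):

import torch

t = torch.tensor([0.0, 1.0, 0.25])
t_opp = 1 - t                     # works for float tensors
b = torch.tensor([True, False])
b_opp = ~b                        # for boolean tensors, use logical negation instead
print(t_opp, b_opp)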
from moviepy.editor import *
from pydub.generators import Sine
from pydub import AudioSegment
# Regenerate the base audio
voice = Sine(180).to_audio_segment(duration=8000).apply_gain(-15)
beat = Sine(100).to_audio_segment(duration=8000).apply_gain(-20)
mix = beat.overlay(voice)
# Export the MP3 audio
audio_path = "/mnt/data/saludo_piero_26is_lofi.mp3"
mix.export(audio_path, format="mp3")
# Generate the video from an image
image_path = "/mnt/data/A_digital_image_combining_text_and_a_gradient_back.png"
audio = AudioFileClip(audio_path)
clip = ImageClip(image_path).set_duration(audio.duration).set_audio(audio)
# Export as the final MP4 video
video_path = "/mnt/data/saludo_piero_26is_final.mp4"
clip.write_videofile(video_path, fps=24)
video_path
Switching from GPT-4.1 to Claude Sonnet 4 fixed this for me
I did another example following the other answer using the element plus playground, which is using a more recent version too: element-plus.run/