You should consider adding "use client" at the top of your component. Read more about it in the official docs.
With JPMS you’ll likely need to merge everything into a single module manually.
Your standard output stream (sys.stdout) is being redirected or replaced by some startup configuration, site customization, or environment hook that loads before your code runs
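In case it helps, here is a quick way to check from inside your script whether stdout has been swapped out before your code ran (a small sketch; the variable names are mine). The interpreter keeps the original stream in sys.__stdout__:

```python
# Compare the current stdout against the interpreter's original stream.
import sys

replaced = sys.stdout is not sys.__stdout__
print("stdout replaced:", replaced)
if replaced:
    # Report what replaced it, writing to the real terminal stream
    print("replacement type:", type(sys.stdout).__name__, file=sys.__stdout__)
```

If this prints True, look for sitecustomize.py, usercustomize.py, PYTHONSTARTUP, or a framework hook that runs before your code.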
Mark! In case you're still wondering how to do this, I just got it working by using [openPanelObj setDirectoryURL:[NSURL fileURLWithPath:NSS(woof)]], where woof is the filename concatenated onto the directory path and NSS is a #define that makes an NSString from a C string. It had been a problem for me for 13 years, until I reread Michael Robinson's answer here: How can I have a default file selected for opening in an NSOpenPanel?
Thanks! I used the 2nd suggestion, by Michal, and it worked! I appreciate your help!
I put the following in the first cell of every notebook needing wrapping:
from IPython.display import HTML, display

def set_css():
    display(HTML('<style> pre { white-space: pre-wrap; } </style>'))

get_ipython().events.register('pre_run_cell', set_css)
I found it was a problem with the system gesture navigation.
The left-edge back gesture in landscape mode behaves strangely, while the right-edge back gesture and three-button navigation work fine.
How did you manage to convert YUV[] to RGB[]? I am trying to read the IntPtr param in the callback, but I always get an Access Violation even though the documentation states that, depending on the type, it should contain some data.
How did you achieve this in the end? Finding memory regions in a TTD recording, that is.
Sure, there are several ways to convert images to PDF online. If you don't want to install any software or deal with watermarks, I suggest using a lightweight browser-based tool.
With help from the other answer
<Box sx={{flex: 1, position: "relative"}}>
<DataGrid
style={{position: "absolute", width: "100%"}}
//...
/>
</Box>
The above code is a flex item. When placed in a flex container with flex-direction: column, it will fill its remaining vertical space.
I cannot see any code, so it is hard to imagine what is going on exactly.
In general, though, you should be able to monkey-patch almost anything in JavaScript, as far as I know.
I don't know exactly where the entry point for Mocha is, but you should be able to assign your own implementation to any function on any object.
Even something like this should be legal; for how it would work, I guess you could go read the source.
performance.now=()=>{
//do stuff
}
Add custom CSS to your Quarto presentation
Create a CSS file, for example custom.css, and add this:
.outside-right {
margin-right: -5em;
}
.outside-left {
margin-left: -5em;
}
.outside-both {
margin-left: -5em;
margin-right: -5em;
}
Solved. I added the needed pypi repo. We have a company clone of pypi with approved libraries but also my team has its own where we place things we produce. That's where my plugin resides so my project needed to reference that repo too. Fair enough.
font-display: swap; in your CSS tells the browser to use a fallback font when the desired font is not available immediately. As soon as it becomes available, the browser swaps in the font.
You can change it to
font-display: block;
There is a library for exactly this use case: profile_name_avatar.
It supports network and local images with caching, and falls back first to a placeholder and then to initials derived from the name.
import 'package:profile_name_avatar/profile_name_avatar.dart';
ProfileImage(
imageSource: "https://example.com/avatar.jpg",
placeholder: "assets/images/placeholder.png", // Fallback when imageSource fails
fallbackName: "J D", // Used when both above fail
radius: 100, // Optional
textStyle: TextStyle( // Optional
fontSize: 24,
fontWeight: FontWeight.bold,
color: Colors.white,
),
backgroundColor: Colors.orange, // Optional
)
Fallback example
You could put the format string into a query parameter:
sql = "select name, date_format(birthdate, %s) as date from People where name = %s"
cursor.execute(sql, ("%Y-%m-%d", "Peter"))
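If it's useful, the same idea can be sketched with Python's built-in sqlite3 module (sqlite uses ? placeholders and strftime instead of DATE_FORMAT; the table contents here are made up for the demo):

```python
# Passing the date-format string as a bind parameter, just like any value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE People (name TEXT, birthdate TEXT)")
conn.execute("INSERT INTO People VALUES ('Peter', '1990-05-17')")

sql = "SELECT name, strftime(?, birthdate) AS date FROM People WHERE name = ?"
row = conn.execute(sql, ("%Y/%m/%d", "Peter")).fetchone()
print(row)  # ('Peter', '1990/05/17')
```

The key point is the same in both drivers: the format string goes through the parameter list, never through string concatenation.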
Solved ! (thanks traynor and browsermator)
I have an Angular interceptor that adds 'Content-Type': 'application/json' by default.
So I just had to exclude the "/upload" path.
In Android Studio Narwhal, you can find that option under Advanced Settings - Version Control - "Use modal commit interface for Git and Mercurial"; enable it to bring back the Local Changes tab.
It seems the issue was that being on React 18 prevented this from working. Updating react and react-dom (along with the types) fixed it.
What happens if you use a combination like this?
SELECT
TS.MasterID,
TS.RecordNr,
TR.MasterName,
TR.Date1,
C.MasterName
FROM TABLE_SUB TS
INNER JOIN TABLE_C C ON TS.MasterID = C.MasterID
INNER JOIN TABLE_R TR ON TS.MasterID = TR.MasterID AND TS.RecordNr = TR.RecordNr
With Docker Desktop and WSL,
docker was unable to pull "hello-world"; it failed with an error.
What helped: the 3 dots at the bottom left (near the engine status) -> Troubleshoot -> Reset to factory settings.
After that, images started pulling.
What I tried before:
- setting DNS in the Docker Engine config
- setting DNS in WSL
- etc.
Just inline the bind value using DSL.inline()
You can use the official instructions: https://code.visualstudio.com/docs/remote/troubleshooting#_cleaning-up-the-vs-code-server-on-the-remote
# Kill server processes
kill -9 $(ps aux | grep vscode-server | grep $USER | grep -v grep | awk '{print $2}')
# Delete related files and folder
rm -rf $HOME/.vscode-server # Or ~/.vscode-server-insiders
One natural way to handle this — especially if you're not strictly tied to real-time communication — is to use a message queue (like RabbitMQ, Kafka, or AWS SQS FIFO queues).
By having the sender push messages into a queue, and the receiver consume them in order, the queue itself can help preserve message ordering, retries, and even buffering when one side is temporarily unavailable.
This can offload complexity (like sequence number tracking) from your API/application layer and let infrastructure handle message delivery and ordering guarantees.
Considering you're using a REST API and not a socket-based solution, this approach can be especially helpful to enforce ordering and reliability in an asynchronous, stateless environment.
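As a rough illustration of the ordering guarantee (using Python's in-process queue.Queue as a stand-in for a real broker like RabbitMQ, Kafka, or an SQS FIFO queue):

```python
# A FIFO queue preserves insertion order, so the consumer sees messages
# in exactly the order the sender pushed them.
import queue

q = queue.Queue()

# Sender side: push messages in sequence
for i in range(5):
    q.put({"seq": i, "body": f"message {i}"})

# Receiver side: consume in order
received = []
while not q.empty():
    received.append(q.get()["seq"])

print(received)  # [0, 1, 2, 3, 4]
```

With a real broker, the sender and receiver run in separate processes, but the contract is the same: the infrastructure, not your API layer, maintains the sequence.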
You can disable by turning MCP discovery off in VS Code settings.
Settings -> Features -> Chat -> Mcp -> Discovery:enabled (set to false)
You could run git blame on every file in the later commit that also exists in the earlier commit, then look for lines whose commit id is earlier than or the same as your earlier commit.
No. Docker on macOS does not support GPU passthrough, including access to Apple's Metal API or MPS backend inside containers.
This is because Docker runs inside a lightweight Linux virtual machine (via HyperKit or Apple Virtualization Framework), and GPU access is not exposed to that VM.
No. There are currently no supported Docker settings, flags, or experimental features that allow a container to access the host's Metal/MPS capabilities.
Even with all dependencies and PyTorch correctly installed inside the container, torch.backends.mps.is_available() will return False.
I made this little exe to close the whole msedgewebview2.exe process tree; on my Windows 11 it works flawlessly.
https://www.mediafire.com/file/1mri65opj0egagi/Chiudi+msedgewebview2.exe/file
Can you provide more details on how you get the radius for the bounding cylinder? Ideally, paste all your code. Can you also attach the STEP file?
I suspect this is an XY problem. Why do you want to do that? Minimizing the number of lines serves no practical purpose. If you want to make your file smaller, you should use a minifier, which will compress far more than just line breaks.
Read: https://github.com/solana-foundation/anchor/pull/3663
Essentially, use/upgrade to Anchor version 0.31.1 and nightly Rust 1.88.0, then build your project with the IDL, using nightly to generate the IDL (.json & .ts files).
I've run into this issue a few times, and in all cases, I was able to solve it by simply adding a few more slides to the Swiper.
From what I’ve noticed, Swiper’s loop mode requires a minimum number of slides to properly clone the elements and build the loop structure. When there aren’t enough slides, blank spaces may appear between them.
My suggestion is to try adding more slides to the Swiper and see if the problem persists. Most of the time, that fixes it.
Is there any reason you weren't using an Enterprise CA with certificate templates? All of the configurations you were adding to your INF file could be specified in a certificate template. To create the template, start by duplicating the "Workstation" or "Web Server" template since the enrollee is a computer. You could grant the target servers enroll permissions on that template.
Then, you can get certs using pure PowerShell (Administrative, since the key gets created in the machine store):
$Fqdn = [System.Net.Dns]::GetHostByName($env:computername).HostName
Get-Certificate -Template SharePointSts -CertStoreLocation Cert:\LocalMachine\My -DnsName ($fqdn, 'server1')
Oh hey, I actually faced something really similar a while back when I was working on a little time tracking project for train arrivals and departures. Your current setup works fine when the arrival is after the scheduled time, which usually means there’s a delay. But yeah, when the train shows up earlier, the logic kinda breaks, right?
So what I did back then was add a small tweak to handle both late and early arrivals without messing up the output. Here's a slightly adjusted version of your function that might help:
function tomdiff(t1, t2) {
    var m1 = hour2mins(t1);
    var m2 = hour2mins(t2);
    var diff = m2 - m1;
    if (diff < -720) {
        diff += 1440;
    } else if (diff > 720) {
        diff -= 1440;
    }
    return mins2hour(diff);
}
This way, the function should return a positive value if the train is late and a negative one if it arrives early, which makes it easier to understand at a glance.
Also, just as a side note, when I was testing time inputs and checking if the differences were calculated correctly, I found Stundenrechner really useful. It’s basically a simple tool to calculate time differences and delays between two times, and honestly, it helped me catch a few bugs in my own setup.
Regards
Muhamamd Shozab
There are some layout components for this, e.g. n-grid, n-flex, and n-space.
Read the related documentation.
Did you find an answer? I'm also stuck on this.
I have the same issue; did you find a way to fix it?
Convert it to the TreeOption structure, as the type defines.
This post helped me to get it running: https://www.sipponen.com/archives/4024
I don't know exactly what your goals are but I might have something for you:
Notion Page to present a trading bot
Update, folks: it used to work, but not anymore. Browsers have changed their behavior due to clutter and mobile display considerations, so you need to use JavaScript to make this happen now.
Using var body []byte like this creates a slice with length 0. Passing it into Read() means Read will try to fill at most len(body) bytes; since that is zero, it returns immediately with no error and the slice stays "empty".
Replacing it with body, err := io.ReadAll(resp.Body) works, since io.ReadAll() consumes the entire Reader and returns it as a []byte.
Just 1 line of solution ->
softwareupdate --install-rosetta --agree-to-license
This would work like a charm. Thank me later :)
After adding @rendermode InteractiveServer, the data now loads properly with virtualization and scrolling.
Explanation:
Virtualization requires interactivity between the client and server. Without explicitly setting the render mode, Blazor renders the component as static HTML (non-interactive). By using @rendermode InteractiveServer, the component is rendered as an interactive server-side Blazor component, which supports virtualization and dynamic data loading.
Hope this helps someone else facing the same issue!
@Rex Linder:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform\BackupProductKeyDefault
Is the location for a generic key. It is not the final key, the final key is encrypted.
Best regards,
Steaky
This is very useful to me, thanks! I took the code from the accepted answer above and generalized it a bit, so that it works with any number of interiors, and only needs one call to fig.add_trace:
import plotly.graph_objects as go
from shapely import Point
# Make some circles
shape_a = Point(0, 0).buffer(10) # Exterior shape
shape_b = Point(0, 0).buffer(2) # interior hole
shape_d = Point(5, 0).buffer(1) # interior hole
# subtract holes using shapely
shape_c = shape_a - shape_b - shape_d
# The exterior shape gets added to the coordinates first
x, y = (list(c) for c in shape_c.exterior.xy)
for interior in shape_c.interiors:
    # `None` denotes separated loops in plotly
    x.append(None)
    y.append(None)
    # Extend with each interior shape
    ix, iy = interior.xy
    x.extend(ix)
    y.extend(iy)
fig = go.Figure(layout=go.Layout(width=640, height=640))
fig.add_trace(go.Scatter(x=x, y=y, fill="toself"))
fig.show()
OK, that was my mistake! I had forgotten to set the private endpoint DNS on the DevOps agent!
10.106.99.15 app-xxx-yada.scm.azurewebsites.net
10.106.99.15 app-xxx-yada.azurewebsites.net
You're seeing your model change correct answers just because a user says something different—like saying Paris is the capital of France, then later agreeing with a user who incorrectly says it's Berlin.
This happens because large language models are designed to be helpful and agreeable, even if the user is wrong. They also rely heavily on the conversation history, which can confuse them if the user contradicts known facts.
To fix this:
Set clear system instructions telling the model to stick to the retrieved facts and not blindly follow the user.
Improve your retrieval quality so the right information (like “Paris is the capital of France”) always appears in the supporting context.
Add a validation step to check whether the model’s answer is actually backed by the retrieved content.
Clean or limit the chat history, especially if the user introduces incorrect information.
If needed, force the model to only answer based on what’s retrieved, instead of using general knowledge or previous turns.
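For the validation step, a minimal sketch might look like this (is_grounded, the token-overlap heuristic, and the 0.9 threshold are all my own illustrative choices; a real system would use an entailment model or embedding similarity instead):

```python
# Check whether the model's answer is lexically supported by the
# retrieved passages before trusting it.
def is_grounded(answer: str, retrieved: list[str], threshold: float = 0.9) -> bool:
    answer_tokens = {w.strip(".,!?").lower() for w in answer.split()}
    context_tokens = set()
    for passage in retrieved:
        context_tokens.update(w.strip(".,!?").lower() for w in passage.split())
    if not answer_tokens:
        return False
    # Fraction of answer tokens that appear somewhere in the context
    overlap = len(answer_tokens & context_tokens) / len(answer_tokens)
    return overlap >= threshold

context = ["Paris is the capital and largest city of France."]
print(is_grounded("Paris is the capital of France", context))   # True
print(is_grounded("Berlin is the capital of France", context))  # False
```

If the check fails, you can re-ask the model with a stricter prompt or fall back to "I don't know" rather than echoing the user's incorrect claim.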
IntelliJ
Settings > Version Control > Git > Update > Use credential helper
I checked 'Use credential helper' and (force) push worked without me doing anything else.
.toISOString() is a JS function, not PostgreSQL, so it can't be passed raw to the db.
You'll need to calculate that value in the db using a PostgreSQL RPC or a built-in function like TO_CHAR(created_at, 'YYYY-MM-DD"T"HH24:MI:SSZ').
This builds on Sunyatasattva's answer above, but I added a small bit of logic to be compatible with Web Workers.
import { useCallback, useRef, useState } from 'react'

const DEFAULT_LONGPRESS_INTERVAL_MS = 700

function preventDefault(e: Event) {
  if (!isTouchEvent(e)) return
  if (e.touches.length < 2 && e.preventDefault) {
    if (typeof e.cancelable !== 'boolean' || e.cancelable) {
      e.preventDefault()
    }
  }
}

export function isTouchEvent(e: Event): e is TouchEvent {
  return e && 'touches' in e
}

interface PressHandlers<T> {
  onLongPress: (e: React.MouseEvent<T> | React.TouchEvent<T>) => void,
  onClick?: (e: React.MouseEvent<T> | React.TouchEvent<T>) => void,
}

interface Options {
  delay?: number
  shouldPreventDefault?: boolean
}

export default function useLongPress<T>(
  { onLongPress, onClick }: PressHandlers<T>,
  {
    delay = DEFAULT_LONGPRESS_INTERVAL_MS,
    shouldPreventDefault = true,
  }: Options = {}
) {
  const [longPressTriggered, setLongPressTriggered] = useState(false)
  const timeout = useRef<NodeJS.Timeout>()
  const target = useRef<EventTarget>()

  const start = useCallback((e: React.MouseEvent<T> | React.TouchEvent<T>) => {
    e.persist()
    const clonedEvent = { ...e }
    if (shouldPreventDefault && e.target) {
      e.target.addEventListener('touchend', preventDefault, { passive: false })
      target.current = e.target
    }
    timeout.current = setTimeout(() => {
      onLongPress(clonedEvent)
      setLongPressTriggered(true)
    }, delay)
  }, [onLongPress, delay, shouldPreventDefault])

  const clear = useCallback((
    e: React.MouseEvent<T> | React.TouchEvent<T>,
    shouldTriggerClick = true,
  ) => {
    timeout.current && clearTimeout(timeout.current)
    shouldTriggerClick && !longPressTriggered && onClick?.(e)
    setLongPressTriggered(false)
    if (shouldPreventDefault && target.current) {
      target.current.removeEventListener('touchend', preventDefault)
    }
  }, [shouldPreventDefault, onClick, longPressTriggered])

  return {
    onMouseDown: (e: React.MouseEvent<T>) => start(e),
    onTouchStart: (e: React.TouchEvent<T>) => start(e),
    onMouseUp: (e: React.MouseEvent<T>) => clear(e),
    onMouseLeave: (e: React.MouseEvent<T>) => clear(e, false),
    onTouchEnd: (e: React.TouchEvent<T>) => clear(e),
  }
}
What helped me:
rm -rf .idea/
mvn dependency:purge-local-repository
Then close IDEA and reopen the project.
Hey, you can open the Android Studio app contents, choose Get Info, and update Sharing & Permissions to Read & Write.
To implement authentication in Laravel 12 and later versions using Sanctum, follow these steps:
Laravel Sanctum uses session-based authentication for routes defined in web.php. This works out of the box if you're already using the default Laravel authentication system (e.g., login via form, CSRF protection, etc.).
For API routes, Sanctum expects token-based authentication. To access these routes:
First, generate a token for the user:
$token = $user->createToken('api-token')->plainTextToken;
Then, in your API requests, include the token in the Authorization header like this:
Authorization: Bearer <your-token-here>
This token is stored in the personal_access_tokens table and is used to authenticate API requests.
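For illustration, here is a hypothetical client-side request built in Python (the URL and token value are made up; only the header format matters):

```python
# Attach the Sanctum token to the Authorization header of an API request.
import urllib.request

token = "1|abcdef"  # placeholder for the plainTextToken created above
req = urllib.request.Request(
    "https://example.com/api/user",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    },
)
print(req.get_header("Authorization"))  # Bearer 1|abcdef
```

Any HTTP client works the same way: the token goes in the Authorization header with the Bearer prefix, and Sanctum looks it up in personal_access_tokens on each request.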
you can follow this Stackoverflow thread.
Nevermind, I found out how to do it. In the grid:
columns.Bound(p => p.ProductionLogID).Title("ID").Width(200)
.Filterable(f => f.Cell(cell => cell.Template("IDFilter").ShowOperators(true)));
In javascript:
function IDFilter(e) {
e.element.kendoNumericTextBox({format:"#"});
}
There is a very simple procedure for this; kindly follow these steps:
STEPS:
Open the document or template where you want this shortcut key functionality.
Record the macro:
Press Alt + F8 to open the Macros dialog.
If you have VBA code:
Click on the Developer tab.
Click Record Macro, give it a name, and save it.
Click Stop Recording.
Go to Macros.
Select your desired macro.
Click Edit.
Write or paste your VBA code.
Press Ctrl + S.
Close everything and save.
Assign the macro to a shortcut key:
https://artisthindustani.blogspot.com/2025/08/how-to-use-vba-code-and-assign-macro.html
string partial_information = "";
dynamic obj;
while (...)
{
... (out information);
if (partial_information == "")
{
try
{
obj = JsonConvert.DeserializeObject(information);
}
catch (Newtonsoft.Json.JsonReaderException ex)
// 'information' only contains the first part of the actual information
{
partial_information = information;
}
}
else
{
obj = JsonConvert.DeserializeObject(partial_information + information);
// in the previous loop, some 'information' was written to 'partial_information'.
// Now the concatenation of both is used for deserialising the info.
partial_information = ""; // don't forget to re-initialise afterwards
}
if (obj.Some_Property != null) // <-- Compiler error (!!!)
{
However, this does not compile: the line if (obj.Some_Property != null) raises compiler error CS0165: "Use of unassigned local variable 'obj'".
In my opinion this makes no sense, since obj is declared outside the entire while-loop.
How can I handle this?
I checked this document about it: https://blazorise.com/docs/extensions/datagrid/getting-started
RowSelectable
Handles the selection of the DataGrid row. If not set it will default to always true.
Func<TItem,bool>
So there is a RowSelectable attribute; can you try implementing it? I don't know your code structure, so if you struggle to implement it, please share your code and where you use it.
What I understand is that you need user products + global products.
So in this case we need an OR condition: orWhere('isGlobal', 1)
$products = Product::where('user_id', $user->id)
->orWhere('isGlobal', 1)
->get();
Result: this gives all of the specific user's products + the global products.
If you want the container id without the container running, this solves it:
docker inspect --format="{{.Id}}" <container_name> | sed 's/sha256://g'
I am trying to create a google_eventarc_trigger in my Terraform module so that I am notified when files are uploaded to a specific folder in my GCS bucket. However, I could not find a way to define the path pattern in Terraform. How can I do this? Here is my code.
resource "google_eventarc_trigger" "report_file_created_trigger" {
name = "report-file-created-trigger"
location = var.location
service_account = var.eventarc_gcs_sa
matching_criteria {
attribute = "type"
value = "google.cloud.storage.object.v1.finalized"
}
matching_criteria {
attribute = "bucket"
value = var.file_bucket
}
destination {
cloud_run_service {
service = google_cloud_run_v2_service.confirm_report.name
region = var.location
}
}
Your problem is probably due to using Python 3.11 with TensorFlow — they’re not fully compatible yet. I recommend using Python 3.10 instead.
An easy way to manage different Python versions is with Anaconda. You can read the installation guide here: https://www.anaconda.com/docs/getting-started/anaconda/install
Then just create a new environment like this:
conda create -n venv python=3.10
conda activate venv # Now you're using the Python version inside venv
conda install tensorflow # Or use pip: pip install tensorflow
More info here: Conda create and conda install
This should fix the DLL error. Good luck!
MonthX Unique Events = CALCULATE(DISTINCTCOUNT('Incident Tracker'[Incident Name]), 'Incident Tracker'[Incident Date] >= DATE(2024,1,1) && 'Incident Tracker'[Incident Date] <= DATE(2024,10,1))
If you have a separate date table and a slicer on it (to make it more dynamic), I would consider using VALUES and TREATAS in the filter argument of CALCULATE.
No problem, just add a separate file or section about nvim-tree/nvim-web-devicons and put your custom configuration there. Lazy will look for a plugin spec before loading it and if it finds one or more it will merge them and evaluate them.
Just make sure you're not calling setup somewhere else before that file/section is evaluated by Lazy. If you do, you should pass your configuration to that setup call.
You’re building a voice assistant using AIML for dialog management.
You use std-startup.xml to tell the bot to load another AIML file (output.aiml).
Your pattern in std-startup.xml must exactly match what you pass in Python.
The AIML loader line must be:
kernel.bootstrap(learnFiles="std-startup.xml", commands="LOAD AIML B")
Your std-startup.xml should look like:
<aiml version="1.0.1"><category><pattern>LOAD AIML B</pattern><template><learn>output.aiml</learn></template></category></aiml>
Your output.aiml must include valid categories like:
<category><pattern>I WANT A BURGER</pattern><template>Sure! What kind?</template></category>
Make sure all files are in the same directory.
The command LOAD AIML B is case-sensitive.
Add this to debug:
print(kernel.numCategories()) # Should be > 0
Now run it, and the bot will respond to I WANT A BURGER.
No, you can't read metadata directly from an <img> tag.
Generic font families:
serif: Fonts with small decorative lines (serifs) at the ends of strokes. Examples include Times New Roman and Georgia.
sans-serif: Fonts without serifs. Examples include Arial and Verdana.
None of these generic families are designed to display icons, and attempting to use them as a fallback for an icon font would result in incorrect or unreadable characters being displayed instead of the intended icons.
Therefore, when using icon fonts, it is crucial to ensure that the specific icon font file is properly loaded and accessible, as there is no universal generic fallback that can replicate its functionality.
Adding to gitlab-ci.yml helped me:
variables:
PATH: /home/gitlab-runner/.nvm/versions/node/v18.12.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Maybe it will be useful for someone.
View Templates applied to the view
Filters
Worksets that are turned off
Links that are unloaded
View range or far depth clip
Design Option
Detail level
Phase settings
Autodesk's Revit Visibility Troubleshooting Guide: https://www.autodesk.com/support/technical/article/caas/guidedtroubleshooting/csp/3KGII0aL3ToAA8ArgwyLs4.html
Do you have a codeql-pack.yml file with the codeql/cpp-all pack as dependency?
If not, it might be easiest to set this up using the VS Code extension, see the documentation. This can also be done by using the command ">CodeQL: Create query" in VS Code.
Note that your query uses dataflow API which has been deprecated and removed in newer versions, see the changelog. So you would either have to adjust the query code to use the new API or use codeql/cpp-all: ^2.1.1 (which seems to be the last version which still contains the old dataflow API).
In the directory which contains your codeql-pack.yml and your query file, you can then run the following commands (or use the corresponding VS Code extension commands):
codeql pack install (respectively codeql pack ci if you have a codeql-pack.lock.yml lock file and only want to install the versions specified in it)
codeql database analyze ... my-query.ql
If you came here looking for it while on rails 8, you can find it here: https://github.com/rails/propshaft/blob/e49a9de659ff27462015e54dd832e86e762a6ddc/lib/propshaft/railties/assets.rake#L4
you only need to clean the cache by running this:
npm cache clean --force
In my case this worked.
People make careless mistakes, like letting clients fetch a refresh token using a prior access token. Don't do that: an access token is supposed to exist only for a small time interval, and if a bad actor steals one they could otherwise keep refreshing it indefinitely. A second issue is people who use JWT alone for everything. Some user actions, like changing a password, should require both a JWT and TFA before proceeding. Depending on the level of security and the severity of damage one action could do, implement more than one auth check.
The main benefit of JWT is that it reduces DB calls by being stateless: the backend doesn't have to fetch a new user instance for every action a user takes.
Try luaToEXE, which includes a Python library with a graphical user interface and a command-line tool: Intro
// Your points
Point[] кнопки4 = {
    Point.get(624, 170),
    Point.get(527, 331),
    Point.get(628, 168),
    Point.get(525, 189)
};
Point[] кнопки3 = {
    Point.get(689, 310),
    Point.get(979, 1029),
    Point.get(1243, 662)
};
// Region to read the number from
Point левыйВерх = Point.get(674, 363);
Point правыйНиз = Point.get(726, 401);
// BOT TOKEN and YOUR Telegram account ID
String tgToken = "bot";
String tgChatId = "id";
// BELOW THIS POINT, TOUCH ALMOST NOTHING!
pfc.setOCRLang("eng");
pfc.startScreenCapture(2);
while (!EXIT) {
    String текстЧисло = pfc.getText(левыйВерх, правыйНиз);
    pfc.log("OCR text: '" + текстЧисло + "'");
    // strip all commas
    текстЧисло = текстЧисло.replace(",", "");
    // keep only digits
    текстЧисло = текстЧисло.replaceAll("[^0-9]", "");
    if (текстЧисло.length() < 2) {
        pfc.log("Number too short, skipping");
        continue;
    }
    double число = 999999;
    try {
        число = Double.parseDouble(текстЧисло);
    } catch (Exception e) {
        pfc.log("Failed to parse number: '" + текстЧисло + "'");
        continue;
    }
    pfc.log("Number: " + число);
    if (число <= 1299) { // <= so that exactly 1299 also triggers
        pfc.log("Number is less than or equal to 1299, pressing the 3 buy buttons");
        for (int i = 0; i < кнопки3.length; i++) {
            pfc.click(кнопки3[i]);
            pfc.sleep(550);
        }
        // Send a message to Telegram
        String msg = "Caught an NFT gift for " + (int)число + " stars 🎉";
        pfc.sendToTg(tgToken, tgChatId, msg);
        pfc.log("Sent message to Telegram: " + msg);
    } else {
        pfc.log("Number is greater than 1299, pressing 4 buttons");
        for (int i = 0; i < кнопки4.length; i++) {
            pfc.click(кнопки4[i]);
            pfc.sleep(850);
        }
    }
}
Through 15 years of exponential traffic growth driven by both Double 11 and Alibaba Cloud, we built LoongCollector, an observability agent that delivers 10x the throughput of open-source alternatives while using 80% fewer resources, proving that extreme performance and enterprise reliability can coexist under the most demanding production loads.
Back in the early 2010s, Alibaba’s infrastructure was facing a tidal wave: every Singles’ Day (11.11), traffic would surge to record-breaking levels, pushing our systems to their absolute limits. Our observability stack—tasked with collecting logs, metrics, and traces from millions of servers—was devouring CPU and memory just to keep up. At that time, there were no lightweight, high-performance agents on the market: Fluent Bit hadn’t been invented, Vector was still a distant idea, Logstash was a memory-hungry beast.
The math was brutal: Just a 1% efficiency gain in data collection would save us millions across our massive infrastructure. When you’re processing petabytes of observability data every day, performance isn’t optional—it’s mission-critical.
So, in 2013, we set out to build our own: a lightweight, high-performance, and rock-solid data collector. Over the next decade, iLogtail (now LoongCollector) was battle-tested by the world’s largest e-commerce events, the migration of Alibaba Group to the cloud, and the rise of containerized infrastructure. By 2022, we had open-sourced a collector that could run anywhere—on bare metal, virtual machines, or Kubernetes clusters—capable of handling everything from file logs and container output to metrics, all while using minimal resources.
Today, LoongCollector powers tens of millions of deployments, reliably collecting hundreds of petabytes of observability data every day for Alibaba, Ant Group, and thousands of enterprise customers. The result? Massive cost savings, a unified data collection layer, and a new standard for performance in the observability world.
When processing petabytes of observability data costs you millions, every performance improvement directly impacts your bottom line. A 1% efficiency improvement translates to millions in infrastructure savings across large-scale deployments. That's when we knew we had to share these numbers with the world.
We ran LoongCollector against every major open-source alternative in controlled, reproducible benchmarks. The results weren't just impressive—they were game-changing.
Rigorous Test Methodology
Maximum Throughput: LoongCollector Dominates
| Log Type | LoongCollector | FluentBit | Vector | Filebeat |
|---|---|---|---|---|
| Single Line | 546 MB/s | 36 MB/s | 38 MB/s | 9 MB/s |
| Multi-line | 238 MB/s | 24 MB/s | 22 MB/s | 6 MB/s |
| Regex Parsing | 68 MB/s | 19 MB/s | 12 MB/s | Not Supported |
📈 Breaking Point Analysis: While competitors hit CPU saturation at ~40 MB/s, LoongCollector maintains linear scaling up to 546 MB/s on a single processing thread—the theoretical maximum of our test environment.
Resource Efficiency: Where the Magic Happens
The real story isn't just raw throughput—it's doing more with dramatically less. At identical 10 MB/s processing loads:
| Scenario | LoongCollector | FluentBit | Vector | Filebeat |
|---|---|---|---|---|
| Simple Line (512B) | 3.40% CPU, 29.01 MB RAM | 12.29% CPU (+261%), 46.84 MB RAM (+61%) | 35.80% CPU (+952%), 83.24 MB RAM (+186%) | Performance Insufficient |
| Multi-line (512B) | 5.82% CPU, 29.39 MB RAM | 28.35% CPU (+387%), 46.39 MB RAM (+57%) | 55.99% CPU (+862%), 85.17 MB RAM (+189%) | Performance Insufficient |
| Regex (512B) | 14.20% CPU, 34.02 MB RAM | 37.32% CPU (+162%), 46.44 MB RAM (+36%) | 43.90% CPU (+209%), 90.51 MB RAM (+166%) | Not Supported |
The Performance Breakthrough: 5 Key Advantages
Traditional Approach: Traditional log agents create multiple string copies during parsing. Each extracted field requires a separate memory allocation, and the original log content is duplicated multiple times across different processing stages. This approach leads to excessive memory allocations and CPU overhead, especially when processing high-volume logs with complex parsing requirements.
LoongCollector's Memory Arena: LoongCollector introduces a shared memory pool (SourceBuffer) for each PipelineEventGroup, where all string data is stored once. Instead of copying extracted fields, LoongCollector uses string_view references that point to specific segments of the original data.
Architecture:
Pipeline Event Group
├── Shared Memory Pool (SourceBuffer)
│ └── "2025-01-01 10:00:00 [INFO] Processing user request from 192.168.1.100"
├── String Views (zero-copy references)
│ ├── timestamp: string_view(0, 19) // "2025-01-01 10:00:00"
│ ├── level: string_view(20, 4) // "INFO"
│ ├── message: string_view(26, 22) // "Processing user request"
│ └── ip: string_view(50, 13) // "192.168.1.100"
└── Events referencing original data
Performance Impact:
| Component | Traditional | LoongCollector | Improvement |
|---|---|---|---|
| String Operations | 4 copies | 0 copies | 100% reduction |
| Memory Allocations | Per field | Per group | 80% reduction |
| Regex Extraction | 4 field copies | 4 string_view refs | 100% elimination |
| CPU Overhead | High | Minimal | 15% improvement |
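The idea behind the memory arena can be sketched in a few lines of C++. This is an illustrative sketch only: `SourceBuffer`, `LogEvent`, and the parsing logic are simplified stand-ins, not LoongCollector's actual types; the point is that every extracted field is a `string_view` into a single owned buffer, so parsing performs zero field copies.

```cpp
#include <cassert>
#include <string>
#include <string_view>

// Hypothetical stand-in for the shared memory pool: the raw log line
// is stored exactly once for the whole event group.
struct SourceBuffer {
    std::string data;  // owns the raw bytes
};

// Extracted fields are views into SourceBuffer::data, never copies.
struct LogEvent {
    std::string_view timestamp;
    std::string_view level;
    std::string_view message;
};

// Parse "<timestamp> [<level>] <message>" without copying any field.
LogEvent parse(const SourceBuffer& buf) {
    std::string_view line = buf.data;
    size_t lb = line.find('[');
    size_t rb = line.find(']', lb);
    LogEvent ev;
    ev.timestamp = line.substr(0, lb - 1);        // view, not a copy
    ev.level     = line.substr(lb + 1, rb - lb - 1);
    ev.message   = line.substr(rb + 2);
    return ev;
}
```

Each `string_view` is just a pointer plus a length into the original buffer, which is why the field-copy count in the table above can drop to zero.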
Traditional Approach: Traditional log agents create and destroy PipelineEvent objects for every log entry, leading to frequent memory allocations and deallocations. This approach causes significant CPU overhead (10% of total processing time) and creates memory fragmentation. Simple global object pools introduce lock contention in multi-threaded environments, while thread-local pools fail to handle cross-thread scenarios effectively.
LoongCollector's Event Pool Architecture: LoongCollector implements intelligent object pooling with thread-aware allocation strategies that eliminate lock contention while handling complex multi-threaded scenarios. The system uses different pooling strategies based on whether events are allocated and deallocated in the same thread or across different threads.
Thread Allocation Strategy:
1) Same-Thread Allocation/Deallocation
┌──────────────────┐
│ Processor Thread │──── [Lock-free Pool] ──── Direct Reuse
└──────────────────┘
When events are created and destroyed within the same Processor Runner thread, each thread maintains its own lock-free event pool. Since only one thread accesses each pool, no synchronization overhead is required.
2) Cross-Thread Allocation/Deallocation
┌────────────────┐ ┌─────────────────┐
│ Input Thread │────▶│ Processor Thread│
└────────────────┘ └─────────────────┘
│ │
└── [Double Buffer Pool] ──┘
For events created in Input Runner threads but consumed in Processor Runner threads, we implement a double-buffer strategy:
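A minimal sketch of such a double-buffer pool follows. This is illustrative, not LoongCollector's implementation: producers return freed events into a mutex-guarded "write" list, and the single consumer swaps that list with its private "read" list in one short critical section, after which it pops events lock-free.

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// Illustrative double-buffer pool (names are hypothetical).
template <typename Event>
class DoubleBufferPool {
public:
    // Called from input threads: return an event to the pool.
    void Release(Event* e) {
        std::lock_guard<std::mutex> lock(mu_);
        write_.push_back(e);
    }

    // Called only from the processor thread: get a recycled event.
    Event* Acquire() {
        if (read_.empty()) {
            // One brief lock to swap in everything released so far.
            std::lock_guard<std::mutex> lock(mu_);
            read_.swap(write_);
        }
        if (read_.empty()) return new Event();  // pool empty: allocate
        Event* e = read_.back();
        read_.pop_back();                        // lock-free fast path
        return e;
    }

private:
    std::mutex mu_;
    std::vector<Event*> write_;  // shared with producer threads
    std::vector<Event*> read_;   // owned by the consumer thread
};
```

The consumer takes the lock at most once per batch of recycled events rather than once per event, which is the property that keeps cross-thread pooling cheap.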
Performance Impact:
| Aspect | Traditional | LoongCollector | Improvement |
|---|---|---|---|
| Object creation | Per event | Pool reuse | 90% reduction |
| Memory fragmentation | High | Minimal | 80% reduction |
Traditional Approach: Standard serialization involves creating intermediate Protobuf objects before converting to network bytes. This two-step process requires additional memory allocations and CPU cycles for object construction and serialization, leading to unnecessary overhead in high-throughput scenarios.
LoongCollector's Zero-Copy Serialization: LoongCollector bypasses intermediate object creation by directly serializing PipelineEventGroup data according to Protobuf wire format. This eliminates the temporary object allocation and reduces memory pressure during serialization.
Architecture:
Traditional: PipelineEventGroup → ProtoBuf Object → Serialized Bytes → Network
LoongCollector: PipelineEventGroup → Serialized Bytes → Network
Performance Impact:
| Metric | Traditional | LoongCollector | Improvement |
|---|---|---|---|
| Serialization CPU | 12.5% | 5.8% | 54% reduction |
| Memory allocations | 3 copies | 1 copy | 67% reduction |
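To make the "skip the intermediate object" step concrete, here is a hedged sketch of emitting Protobuf wire format directly: a varint writer plus a length-delimited string field. The field numbers and helper names are made up for illustration; a real serializer would walk the PipelineEventGroup and emit every field this way.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <string_view>

// Append v to out as a Protobuf base-128 varint.
void PutVarint(std::string& out, uint64_t v) {
    while (v >= 0x80) {
        out.push_back(static_cast<char>((v & 0x7F) | 0x80));
        v >>= 7;
    }
    out.push_back(static_cast<char>(v));
}

// Append a length-delimited field: tag = (field_number << 3) | 2.
void PutStringField(std::string& out, uint32_t field, std::string_view s) {
    PutVarint(out, (static_cast<uint64_t>(field) << 3) | 2);
    PutVarint(out, s.size());
    out.append(s.data(), s.size());
}
```

Because the bytes go straight from the event group's buffers into the output string, no temporary message object is ever constructed.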
While LoongCollector demonstrates impressive performance advantages, its reliability architecture is equally noteworthy. The following sections detail how LoongCollector achieves enterprise-grade stability and fault tolerance while maintaining its performance edge.
LoongCollector's multi-tenant architecture ensures isolation between different pipelines while maintaining optimal resource utilization. The system implements a high-low watermark feedback queue mechanism that prevents any single pipeline from affecting others.
Multi-Pipeline Architecture with Independent Queues:
┌─ LoongCollector Multi-Tenant Pipeline Architecture ───────────────────┐
│ │
│ ┌─ Pipeline A ─┐ ┌─ Pipeline B ─┐ ┌─ Pipeline C ─┐ │
│ │ │ │ │ │ │ │
│ │ Input Plugin │ │ Input Plugin │ │ Input Plugin │ │
│ │ ↓ │ │ ↓ │ │ ↓ │ │
│ │ Process Queue│ │ Process Queue│ │ Process Queue│ │
│ │ ↓ │ │ ↓ │ │ ↓ │ │
│ │ Sender Queue │ │ Sender Queue │ │ Sender Queue │ │
│ │ ↓ │ │ ↓ │ │ ↓ │ │
│ │ Flusher │ │ Flusher │ │ Flusher │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │ │ │ │
│ └───────────────────┼─────────────────┘ │
│ │ │
│ ┌─ Shared Runners ────────────────────────────────────────────────┐ │
│ │ │ │
│ │ ┌─ Input Runners ─┐ ┌─ Processor Runners ┐ ┌─ Flusher Runners ─┐ │ │
│ │ │ • Pipeline │ │ • Priority-based │ │ • Watermark-based │ │ │
│ │ │ isolation │ │ scheduling │ │ throttling │ │ │
│ │ │ • Independent │ │ • Fair resource │ │ • Back-pressure │ │ │
│ │ │ event pools │ │ allocation │ │ control │ │ │
│ │ └─────────────────┘ └────────────────────┘ └───────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────────┘
High-Low Watermark Feedback Queue Mechanism:
┌─ High-Low Watermark Feedback System ─────────────────────┐
│ │
│ ┌─ Queue State Management ─┐ ┌─ Feedback Mechanism ──┐ │
│ │ │ │ │ │
│ │ ┌─── Normal State ───┐ │ │ ┌──── Upstream ────┐ │ │
│ │ │ Size < Low │ │ │ │ Check │ │ │
│ │ │ Accept all data │ │ │ │ Before Write │ │ │
│ │ └────────────────────┘ │ │ └──────────────────┘ │ │
│ │ │ │ │ │ │
│ │ ▼ │ │ │ │
│ │ ┌── High Watermark ──┐ │ │ │ │
│ │ │ Size >= High │ │ │ ┌──── Downstream ──┐ │ │
│ │ │ Stop accepting │ │ │ │ Feedback Enabled │ │ │
│ │ │ non-urgent data │ │ │ └──────────────────┘ │ │
│ │ └────────────────────┘ │ │ │ │
│ │ │ │ │ │ │
│ │ ▼ │ │ │ │
│ │ ┌─ Recovery State ──┐ │ │ │ │
│ │ │ Size <= Low │ │ │ │ │
│ │ │ Resume accepting data │ │ │ │
│ │ └───────────────────┘ │ │ │ │
│ └──────────────────────────┘ └───────────────────────┘ │
└──────────────────────────────────────────────────────────┘
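The watermark state machine above can be sketched as a small queue wrapper. This is a simplified illustration (thresholds and names are hypothetical): admission stops once the size reaches `high` and only resumes after the queue drains back to `low`, so the accept/reject decision does not flap around a single threshold.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Illustrative high-low watermark queue.
template <typename T>
class WatermarkQueue {
public:
    WatermarkQueue(size_t low, size_t high) : low_(low), high_(high) {}

    // Upstream checks before writing; false means back-pressure.
    bool TryPush(const T& item) {
        if (blocked_ && q_.size() <= low_) blocked_ = false;  // recovered
        if (q_.size() >= high_) blocked_ = true;              // saturated
        if (blocked_) return false;
        q_.push_back(item);
        return true;
    }

    bool Pop(T& out) {
        if (q_.empty()) return false;
        out = q_.front();
        q_.pop_front();
        if (q_.size() <= low_) blocked_ = false;  // feedback downstream
        return true;
    }

private:
    size_t low_, high_;
    bool blocked_ = false;
    std::deque<T> q_;
};
```

The gap between the two watermarks is what gives the hysteresis: a queue hovering near capacity rejects writes until it has genuinely drained, instead of oscillating between accept and reject on every push.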
Isolation Benefits:
Enterprise environments run multiple pipelines with different criticality levels. Our priority-aware round-robin scheduler ensures fairness while respecting business priorities. The system implements a sophisticated multi-level scheduling algorithm that guarantees resource allocation fairness while maintaining strict priority enforcement.
Priority Scheduling Principles
The core scheduling algorithm ensures both fairness within priority levels and strict priority enforcement between levels. The system follows strict priority ordering while maintaining fair round-robin scheduling within each priority level.
┌─ High Priority ────────────────────────────────────────────────────┐
│ ┌───────────┐ │
│ │ Pipeline1 │ ◄─── Always processed first │
│ └───────────┘ │
│ │ │
│ ▼ (Priority transition) │
└────────────────────────────────────────────────────────────────────┘
┌─ Medium Priority (Round-robin cycle) ──────────────────────────────┐
│ ┌───────────┐ ┌─────────────────┐ ┌────────────┐ │
│ │ Pipeline2 │───▶│ Pipeline3(Last) │───▶│ Pipeline 4 │ │
│ └───────────┘ └─────────────────┘ └────────────┘ │
│ ▲ │ │
│ └────────────────────────────────────────┘ │
│ │
│ Note: Last processed was Pipeline3, so next starts from Pipeline4 │
│ │ │
│ ▼ (Priority transition) │
└────────────────────────────────────────────────────────────────────┘
┌─ Low Priority (Round-robin cycle) ─────────────────────────────────┐
│ ┌───────────┐ ┌───────────┐ │
│ │ Pipeline5 │───▶│ Pipeline6 │ │
│ └───────────┘ └───────────┘ │
│ ▲ │ │
│ └───────────────────┘ │
│ │
│ Note: Processed only when higher priority pipelines have no data │
└────────────────────────────────────────────────────────────────────┘
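The diagrams above can be condensed into one selection function. This is a hedged sketch (the `Pipeline`/`PriorityLevel` structs and `has_data` flag are illustrative, not the real scheduler): levels are scanned in strict priority order, and within a level the scan resumes just after the last-served pipeline, which gives the round-robin fairness.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Pipeline {
    std::string name;
    bool has_data;
};

struct PriorityLevel {
    std::vector<Pipeline> pipelines;
    size_t next = 0;  // round-robin cursor: where to resume scanning
};

// Returns the pipeline to serve next, or nullptr if nothing has data.
Pipeline* PickNext(std::vector<PriorityLevel>& levels) {
    for (auto& level : levels) {              // strict priority order
        size_t n = level.pipelines.size();
        for (size_t i = 0; i < n; ++i) {      // fair scan within level
            size_t idx = (level.next + i) % n;
            if (level.pipelines[idx].has_data) {
                level.next = (idx + 1) % n;   // resume after this one
                return &level.pipelines[idx];
            }
        }
    }
    return nullptr;
}
```

A lower level is reached only when every higher level comes up empty, matching the "processed only when higher priority pipelines have no data" note above.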
When one destination fails, traditional agents often affect all pipelines. LoongCollector implements adaptive concurrency limiting per destination.
AIMD Based Flow Control:
┌─ ConcurrencyLimiter Configuration ───────────────────────────────────────┐
│ │
│ ┌─ Failure Rate Thresholds ────────────────────────────────────────────┐ │
│ │ │ │
│ │ ┌─ No Fallback Zone ─┐ ┌─ Slow Fallback Zone ─┐ ┌─ Fast Fallback ──┐ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ 0% ─────────── 10% │ │ 10% ──────────── 40% │ │ 40% ─────── 100% │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ Maintain Current │ │ Multiply by 0.8 │ │ Multiply by 0.5 │ │ │
│ │ │ Concurrency │ │ (Slow Decrease) │ │ (Fast Decrease) │ │ │
│ │ └────────────────────┘ └──────────────────────┘ └──────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─ Recovery Mechanism ─┐ │
│ │ • Additive Increase │ ← +1 when success rate = 100% │
│ │ • Gradual Recovery │ ← Linear scaling back to max │
│ └──────────────────────┘ │
└──────────────────────────────────────────────────────────────────────────┘
Each concurrency limiter uses an adaptive rate limiting algorithm inspired by AIMD (Additive Increase, Multiplicative Decrease) network congestion control. When sending failures occur, the concurrency is quickly reduced. When sends succeed, concurrency gradually increases. To avoid fluctuations from network jitter, statistics are collected over a time window/batch of data to prevent rapid concurrency oscillation.
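The AIMD rule can be sketched with the thresholds from the diagram above (failure rate below 10%: hold; 10-40%: multiply by 0.8; above 40%: multiply by 0.5; fully healthy window: add 1). The class name and the per-window statistics handling are simplified assumptions, not the real ConcurrencyLimiter.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative AIMD-style concurrency limiter.
class ConcurrencyLimiter {
public:
    explicit ConcurrencyLimiter(int max) : max_(max), limit_(max) {}

    // Feed one statistics window; adjust the limit from its failure rate.
    void OnWindow(int successes, int failures) {
        double total = successes + failures;
        double fail_rate = total > 0 ? failures / total : 0.0;
        if (fail_rate >= 0.4) {
            limit_ = std::max(1, static_cast<int>(limit_ * 0.5));  // fast fallback
        } else if (fail_rate >= 0.1) {
            limit_ = std::max(1, static_cast<int>(limit_ * 0.8));  // slow fallback
        } else if (failures == 0) {
            limit_ = std::min(max_, limit_ + 1);  // additive recovery
        }  // 0% < rate < 10%: maintain current concurrency
    }

    int Limit() const { return limit_; }

private:
    int max_;
    int limit_;
};
```

Adjusting once per window, rather than per request, is what damps out the oscillation from transient network jitter mentioned above.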
By using this strategy, when network anomalies occur at a sending destination, the allowed data packets for that destination can quickly decay, minimizing the impact on other sending destinations. In network interruption scenarios, the sleep period approach maximizes reduction of unnecessary sends while ensuring timely recovery of data transmission within a limited time once the network is restored.
LoongCollector has been validated in some of the world's most demanding production environments, processing real-world workloads that would break most observability systems. As the core data collection engine powering Alibaba Cloud SLS (Simple Log Service)—one of the world's largest cloud-native observability platforms—LoongCollector processes observability data for tens of millions of applications across Alibaba's global infrastructure.
Global Deployment Scale:
Enterprise Customer Validation:
Extreme Scenario Testing:
Scalability
Network Resilience
Chaos Engineering
LoongCollector represents more than just performance optimization—it's a fundamental rethinking of how observability data should be collected, processed, and delivered at scale. By open-sourcing this technology, we're democratizing access to enterprise-grade performance that was previously available only to the largest tech companies.
Ready to experience 10x performance improvements?
🚀 GitHub Repository: https://github.com/alibaba/loongcollector
📊 Benchmark Suite: Clone our complete benchmark tests and reproduce these results in your environment
📖 Documentation: Comprehensive guides for migration, optimization, and advanced configurations
💬 Community Discussion: Join our Discord for technical discussions and architecture deep-dives
Challenge us: If you're running Filebeat, FluentBit, or Vector in production, we're confident LoongCollector will deliver significant improvements in your environment. Run our benchmark suite and let the data speak.
Contribute: LoongCollector is built by engineers, for engineers. Whether it's performance optimizations, new data source integrations, or reliability improvements—every contribution shapes the future of observability infrastructure.
Open Questions for the Community:
Benchmark Challenge: We're confident in our numbers, but we want to see yours. Run our benchmark suite against your current setup and share the results. If you can beat our performance, we'll feature your optimizations in our next release.
The next time your log collection agent consumes more resources than your actual application, remember: there's a better way. LoongCollector proves that high performance and enterprise reliability aren't mutually exclusive—they're the foundation of modern observability infrastructure.
Built with ❤️ by the Alibaba Cloud Observability Team. Battle-tested across hundreds of petabytes of daily production data and tens of millions of instances.
For large ranges, if you don't want to apply the formula to the whole row/column, you can select the start of the range, then use the scroll bar to go to the end of the range. Hold SHIFT and select the end of the range. This selects the whole range. Then press CTRL + ENTER to apply the formula.
Use svn changelist. best tool ever to solve this.
Check this:
https://github.com/tldr-pages/tldr/blob/main/pages/common/svn-changelist.md
And if you want to see the list added, use svn status
The documentation is not that clear, but I have just finished a script to split a large svn commit into smaller commits.
I think you must URL-encode the query parameters before sending the request to the server.
For example, the Arabic character "أ" (U+0623) should be percent-encoded as %D8%A3.
So instead of idNo=2/أ, send idNo=2/%D8%A3.
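As a hedged illustration of that rule, here is a minimal percent-encoder for query parameter values. It escapes every byte outside the RFC 3986 unreserved set over the value's UTF-8 bytes; note this also escapes `/` (strict encoding for a value), which is safe even though leaving `/` bare often works too.

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <string_view>

// Percent-encode a query parameter value byte by byte (UTF-8 input).
std::string UrlEncode(std::string_view value) {
    std::string out;
    for (unsigned char c : value) {
        bool unreserved = (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
                          (c >= '0' && c <= '9') || c == '-' || c == '_' ||
                          c == '.' || c == '~';
        if (unreserved) {
            out.push_back(static_cast<char>(c));
        } else {
            char buf[4];
            std::snprintf(buf, sizeof(buf), "%%%02X", c);  // e.g. %D8
            out += buf;
        }
    }
    return out;
}
```

For "أ" (U+0623), the UTF-8 bytes 0xD8 0xA3 come out as %D8%A3, matching the example above.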
For resharper there is:
- ReSharper_UnitTestRunFromContext (run at cursor)
- ReSharper_UnitTestDebugContext (debug at cursor)
- ReSharper_UnitTestSessionRepeatPreviousRun
Here is the right way to ask Bloomberg:
from xbbg import blp  # blp.bds comes from the xbbg library

holidays = blp.bds(
    'USD Curncy',
    'CALENDAR_NON_SETTLEMENT_DATES',
    SETTLEMENT_CALENDAR_CODE='FD',
    CALENDAR_START_DATE='20250101',
    CALENDAR_END_DATE='20261231'
)
print(holidays)
N.B. 'FD' is the calendar for the US.
You will get a DataFrame with a 'holiday_date' column containing the dates in yyyy-mm-dd format.
https://stackoverflow.com/a/79704676/17078296
Have you checked whether your vendor folder is excluded from language server features?
2025 Update
According to the Expo documentation, use:
npx expo install --fix
Tip: Run this command multiple times — it may continue updating dependencies on each pass until everything is aligned with the expected versions.
What helped me was to change the Java version.
I downgraded from Java 17 to Java 11, because Java 17 introduced stricter formatting rules and the code started to fail at class initialization time (in a static block), causing a NoClassDefFoundError.
IMO the author tag helps in situations where the code is viewed outside an IDE, like on GitHub, in a Bash CLI, or in tools like Notepad++; there, the tag gives direct information about the origins of the thing (interface/class/method). The VCS is helpful but has a learning curve and needs an IDE.
From another perspective, the author tag may help in enforcing the Open-Closed principle in SOLID. For example, when a developer comes to touch an interface marked `@author <senior_developer>`, they may hesitate to modify something designed by a senior person, since touching it could break important things; instead, they will think about making an extension of it, which is great.
Thanks for your comments, they were very useful and pushed my thinking in the right direction.
In short: the original C++ code creates an image in WMF format (from Windows 3.0, you remember it, right?). I changed the C++ code and started generating EMF files (which arrived with Windows 95).
For example, this code
CMetaFileDC dcm;
dcm.Create();
has been replaced with this one:
CMetaFileDC dcm;
dcm.CreateEnhanced(NULL, NULL, NULL, _T("Enhanced Metafile"));
I walked through all related locations and now I have EMF as the output format.
This step solved all my issues. I don't even need to convert the EMF file to BMP format; I can paste/use it directly in my C# code.
Thanks again for your thoughts and ideas, I really appreciate it.
I had the same issue and found out it was caused by a dependency conflict — in my case, it was the devtools dependency.
After removing it, everything went back to working normally.
SOLVED!
Thanks to Wayne from the comments, who guided me to the CUDA Program Files directories.
Some of the cuDNN files from the downloaded 8.1 version weren't present in
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include
What worked:
Downloading a new cuDNN 8.1 .zip file from NVIDIA website
Extracting it into Downloads/
Copying the files from bin/, include/, and lib/x64 into the corresponding directories in
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\
That's it.
It might be because I hit the chat limit, since I had the same problem today and nothing else explains it.
List of actions tried:
Ending the VS Code task in Task Manager.
Changing the Copilot version.
Uninstalling and reinstalling Copilot.
Reloading the VS Code window.
Update from 7th of August 2025:
I have a Firebase Cloud Function for handling RTDN (Real-Time Developer Notifications), but I got an error message saying that I don't have the required permissions. AI tools could not really help me all the way. With the info I could get from them and the answers from people in this post, I ended up getting it to work like this:
In the Google Cloud console, under Navigation Menu -> IAM & Admin -> IAM, I searched for an entry with a Principal like "[email protected]" and a name like "App Engine default service account".
Then I went to the Google Play Console app list screen -> Users and permissions -> next to "Manage users" there are 3 vertical dots -> clicked them and selected "Invite new users". For the email address I entered "[email protected]"; under account permissions I chose only "View app information and download bulk reports (read only)", "View financial data, orders and cancellation survey responses" and "Manage orders and subscriptions", then pressed the button to invite the user.
Then, in the Google Play Console, I went to the app in question -> to the subscription (in my case I only have one anyway), deactivated and reactivated it, and after a few minutes it worked for me.
Hope this might help someone in the future.
I'm not sure when support for setToken ended, but in later versions of Python dbutils, it's definitely no longer supported. As a matter of fact, it's hard to find any references to it in the official documentation & GitHub.