79687118

Date: 2025-07-02 08:53:00
Score: 4
Natty: 4
Report link

Is there a line like

this.model.on('change', this.render, this);

in the code, or is listenTo() being used to listen for changes?

Reasons:
  • Low length (1):
  • Has code block (-0.5):
  • Ends in question mark (2):
  • Starts with a question (0.5): Is there a
  • Low reputation (1):
Posted by: Mia

79687112

Date: 2025-07-02 08:47:58
Score: 5
Natty:
Report link

I am also working on this and referencing the nfclib source code.
Here is my project: https://github.com/JamesQian1999/macOS-NFC-Tool

Reasons:
  • Probably link only (1):
  • Contains signature (1):
  • Low length (1.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: James

79687109

Date: 2025-07-02 08:45:57
Score: 2.5
Natty:
Report link

Hm. I can reproduce this behaviour and confirm that your answer solves it, as I'm currently tackling the same problem. And somehow it makes sense that it behaves like this, given the possibilities to automatically or manually acknowledge the record.

However, it seems quite impractical to set max.poll.records to 1 if I want to have an accurate lag metric, given that the default is 500 for a reason. I assume this would have a negative performance impact. Therefore I'm asking myself whether there is any simpler solution anybody has come across to measure the lag.

I thought about measuring it on the server side of my Confluent Kafka with JMX, but that is also a bit of an overhead. Any other ideas or best practices on how to solve this and get an accurate consumer lag? My goal is to raise alerts if consumer lag builds up.
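
If JMX feels too heavy, one simpler idea is to compute the lag externally from the group's committed offsets and the partition end offsets. Below is a minimal sketch, assuming the confluent-kafka Python client; the broker address, topic, and group id are placeholders:

from confluent_kafka import Consumer, TopicPartition

# Placeholders: adjust broker, group id and topic for your setup.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "my-consumer-group",
    "enable.auto.commit": False,
})

topic = "my-topic"
metadata = consumer.list_topics(topic, timeout=10)
partitions = [TopicPartition(topic, p) for p in metadata.topics[topic].partitions]

total_lag = 0
for tp in consumer.committed(partitions, timeout=10):
    low, high = consumer.get_watermark_offsets(tp, timeout=10)
    current = tp.offset if tp.offset >= 0 else low  # no commit yet -> assume earliest
    lag = high - current
    total_lag += lag
    print(f"partition {tp.partition}: lag={lag}")

print(f"total lag: {total_lag}")
consumer.close()

Run on a schedule, this is enough to raise an alert when the total lag crosses a threshold.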

Reasons:
  • RegEx Blacklisted phrase (1): I want
  • Long answer (-0.5):
  • No code block (0.5):
  • Contains question mark (0.5):
  • Low reputation (1):
Posted by: emaarco

79687107

Date: 2025-07-02 08:44:57
Score: 1
Natty:
Report link

Based on @Cory's reply, I made a function that returns the same date in the previous month. You just need the datetime library:

from datetime import datetime, timedelta

def same_dt_in_prev_month(dt: datetime) -> datetime:
    orig_day = dt.day
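    # Note: this assumes the previous month also has this day-of-month;
    # e.g. a day of 31 raises ValueError when the previous month is shorter.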
    return (dt - timedelta(days=orig_day)).replace(day=orig_day)

one_month_ago = same_dt_in_prev_month(datetime.now())
print(one_month_ago)
Reasons:
  • Has code block (-0.5):
  • User mentioned (1): @Cory
  • Low reputation (0.5):
Posted by: andexte

79687106

Date: 2025-07-02 08:42:56
Score: 3
Natty:
Report link

I have encountered the same problem. The reason for this is that the Beale index cannot be calculated. The function will work properly if this index is excluded.

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: gulcanp

79687099

Date: 2025-07-02 08:38:55
Score: 1.5
Natty:
Report link

An alternative to Instant would be to use ZonedDateTime, which is also present in the java.time package and can easily be converted to an Instant.

import java.time.*;

Instant oneYearFromNow = ZonedDateTime.now().plusYears(1).toInstant();

which is the same response I gave for a similar issue: https://stackoverflow.com/a/79686982/7000165

Reasons:
  • Blacklisted phrase (1): stackoverflow
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: Valentine Tobah

79687092

Date: 2025-07-02 08:33:53
Score: 1
Natty:
Report link

An Easy solution to this problem is to

  1. First install your desired version of Python from python.org

  2. During the installation wizard, check the "Add to PATH" option at the bottom of the wizard so that you don't have to manually add Python to PATH from the environment variables

  3. Go to the directory where you want to create the python environment

  4. Run the command py -0 to check which python versions you have available

  5. Then run the command py -<python version> -m venv <environment name>, e.g. py -3.9 -m venv env3.9, to create a virtual environment that uses your desired Python version

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Fauzan Tahir

79687088

Date: 2025-07-02 08:28:52
Score: 4
Natty: 5
Report link

What solved it for me was to increase 'max_input_vars' in php.ini

Reasons:
  • Low length (1.5):
  • No code block (0.5):
  • Single line (0.5):
  • Starts with a question (0.5): What solve
  • Low reputation (1):
Posted by: Franta

79687084

Date: 2025-07-02 08:24:50
Score: 2.5
Natty:
Report link

A little bit late, but you probably have to put the msms binary on the path. I also didn't manage to run it as msms, because it has another name, so I changed the name of the binary to msms and everything ran fine.

Reasons:
  • Low length (0.5):
  • No code block (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: Carlos Chacon

79687082

Date: 2025-07-02 08:23:50
Score: 2.5
Natty:
Report link
{  provide: DATE_PIPE_DEFAULT_OPTIONS,  useValue: {dateFormat: 'dd.MM.yyyy'}}

is the modern way https://angular.dev/api/common/DATE_PIPE_DEFAULT_OPTIONS for global config

Reasons:
  • Probably link only (1):
  • Low length (1):
  • Has code block (-0.5):
  • Starts with a question (0.5): is the
  • Low reputation (0.5):
Posted by: Desperado

79687081

Date: 2025-07-02 08:22:50
Score: 2.5
Natty:
Report link
{  provide: DATE_PIPE_DEFAULT_OPTIONS,  useValue: {dateFormat: 'dd.MM.yyyy'}}

is the modern way https://angular.dev/api/common/DATE_PIPE_DEFAULT_OPTIONS

Reasons:
  • Probably link only (1):
  • Low length (1):
  • Has code block (-0.5):
  • Starts with a question (0.5): is the
  • Low reputation (0.5):
Posted by: Desperado

79687080

Date: 2025-07-02 08:22:50
Score: 2.5
Natty:
Report link
{  provide: DATE_PIPE_DEFAULT_OPTIONS,  useValue: {dateFormat: 'dd.MM.yyyy'}}

is the modern way https://angular.dev/api/common/DATE_PIPE_DEFAULT_OPTIONS

Reasons:
  • Probably link only (1):
  • Low length (1):
  • Has code block (-0.5):
  • Starts with a question (0.5): is the
  • Low reputation (0.5):
Posted by: Desperado

79687070

Date: 2025-07-02 08:18:49
Score: 2
Natty:
Report link

You can use git tags and git hooks as a workaround.

I wrote a blog post that guides you through how to do it.

https://www.reactkeyblog.com/en/posts/how-to-automatically-add-commit-numbers-in-git-fork-%7C-reproduce-perforce-changelist-with-git-tags/296f5b54-e10e-4d6c-8eea-1cde1f886efb#blog-title

Reasons:
  • Whitelisted phrase (-1.5): You can use
  • Probably link only (1):
  • Low length (1):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Key Eric

79687067

Date: 2025-07-02 08:15:48
Score: 1
Natty:
Report link

Do this for KullaniciKayit. You are getting undefined because return Ok(result) returns an ActionResult, but your result is not one:

[HttpPost]
public async Task<ActionResult<Kullanici>> KullaniciKayit(Kullanici model)
{
    var result = await _kullaniciService.KullaniciKayit(model);
    return Ok(result);
}
Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: Caner

79687062

Date: 2025-07-02 08:09:46
Score: 0.5
Natty:
Report link

Extending @CreeperInATardis's answer, which is correct.

multipathd has the following commands that produce JSON:

The first one, multipathd list multipaths json, lists all multipath devices and prints JSON like this:

# multipathd list multipaths json
{
   "major_version": 0,
   "minor_version": 1,
   "maps": [{
      "name" : "mpathi",
      "uuid" : "366c4a7405155413030303135933856ef",
      "sysfs" : "dm-1",
...
   },{
      "name" : "mpathaa",
      "uuid" : "366c4a7405155413030303135a0b0c10d",
      "sysfs" : "dm-2",
...
   }]
}

(the output is very large, so I cut it out)

Reasons:
  • Long answer (-0.5):
  • Has code block (-0.5):
  • User mentioned (1): @CreeperInATardis
  • Low reputation (0.5):
Posted by: Leonid K

79687061

Date: 2025-07-02 08:09:46
Score: 11
Natty: 5.5
Report link

Has anyone found a solution to this yet? Is there any extension available to achieve this?

Reasons:
  • Blacklisted phrase (1): Is there any
  • Blacklisted phrase (2): anyone found
  • RegEx Blacklisted phrase (3): Has anyone found
  • Low length (1):
  • No code block (0.5):
  • Ends in question mark (2):
  • Single line (0.5):
  • Low reputation (1):
Posted by: Zrzeda

79687047

Date: 2025-07-02 07:59:43
Score: 0.5
Natty:
Report link

Save the symbols in a file:

objcopy --only-keep-debug [executable] [executable].dbg

Strip the symbols from the executable:

objcopy --strip-debug [executable]

Add a link to the symbols file to the executable:

objcopy --add-gnu-debuglink=[executable].dbg [executable]

Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: thomas

79687034

Date: 2025-07-02 07:52:40
Score: 2
Natty:
Report link

Seems like you can use Instaloader for querying your saved videos.

You can check out this thread: https://www.reddit.com/r/DataHoarder/comments/mygsu4/instagram_saved_folder_scraper/

Reasons:
  • Whitelisted phrase (-1.5): you can use
  • Probably link only (1):
  • Low length (1):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Masoom Pophaley

79687028

Date: 2025-07-02 07:48:38
Score: 0.5
Natty:
Report link

for x in myList loops over elements that are themselves lists, and the list type does not have an lstrip method.

You will have to adapt it like this: myList = [[x[0].lstrip()] for x in myList if len(x) > 0]

In my example, it only calls lstrip if the length of the inner list is greater than 0; if the inner list is empty, it is dropped.
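
For instance, with a small made-up nested list (hypothetical data, just to illustrate the comprehension):

myList = [["  alpha"], [" beta "], []]
myList = [[x[0].lstrip()] for x in myList if len(x) > 0]
print(myList)  # [['alpha'], ['beta ']]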

Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: RCDevs Security

79687022

Date: 2025-07-02 07:42:37
Score: 1.5
Natty:
Report link

This answer is from @furas' comment on the question.

Set the address in JavaScript to /static/PERDNI/09401576.jpg.

{% static ... %} is Django template syntax, so JavaScript doesn't understand it. So, we need to set the address to the file directly in JavaScript.

Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • User mentioned (1): @furas'
  • Low reputation (0.5):
Posted by: djmoon13

79687019

Date: 2025-07-02 07:41:36
Score: 1.5
Natty:
Report link

In this fast-paced world, the digital world is deeply connected with mobile applications. These mobile apps have penetrated not only into our daily lives, making them much better, but also into our workplace and businesses. Many entrepreneurs, budding or even well-established, have started embracing these changes in their businesses. To embrace these changes more positively, business people are often on the lookout for the Best Mobile App Development Company.

Are you also facing struggles on your way to accomplish your business goals, as it is hard for you to understand the gamut of mobile apps? Well, you need not worry at all as we have compiled several bits of useful and engaging information that will lighten your path for choosing the best mobile app development services.

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Contains question mark (0.5):
  • Low reputation (1):
Posted by: spammer

79687002

Date: 2025-07-02 07:32:34
Score: 3.5
Natty:
Report link

@a-haworth's suggestion to use border-box was great, but for reasons I don't quite understand yet, it didn't work on its own. Instead, I had to switch from using flex: 1 and flex: 3 for my sizing ratio to using flex: 1 1 25% and flex: 1 1 75%. The auto flex basis seems to have been problematic? Not quite sure.

body {
  min-height: 100vh;
  margin: 0;
  
  /* Just for testing */
  width: 760px;
}

.container {
  display: flex;
  min-height: 100vh;
}

.sidebar {
  flex: 1 1 25%;
  
  /* All Good! */
  box-sizing: border-box;
  padding: 0.5rem 1rem;
  max-width: calc(200px);

  background-color: lightsteelblue;
}

main {
  flex: 1 1 75%;
  
  padding: 0.5rem 1rem;
}
<div class="container">
  <aside class="sidebar">Sidebar Content</aside>
  <main>Main Content</main>
</div>

Reasons:
  • Long answer (-0.5):
  • Has code block (-0.5):
  • Ends in question mark (2):
  • User mentioned (1): @a-haworth's
  • Self-answer (0.5):
  • Looks like a comment (1):
Posted by: Devildude4427

79687001

Date: 2025-07-02 07:32:34
Score: 1
Natty:
Report link

In Android Kotlin, you can change the button tint programmatically like this:

I am using Jetpack view binding.

binding.imgEdit.backgroundTintList= ContextCompat.getColorStateList(context, R.color.green)
Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: Aakash

79687000

Date: 2025-07-02 07:31:33
Score: 11.5
Natty: 5.5
Report link

Hi i am facing the same error, please guide me.. i am using windows

Reasons:
  • Blacklisted phrase (1): guide me
  • RegEx Blacklisted phrase (2.5): please guide me
  • RegEx Blacklisted phrase (1): i am facing the same error
  • RegEx Blacklisted phrase (1): i am facing the same error, please
  • Low length (1.5):
  • No code block (0.5):
  • Me too answer (2.5): i am facing the same error
  • Single line (0.5):
  • Low reputation (1):
Posted by: anushkha thakur

79686998

Date: 2025-07-02 07:31:33
Score: 2.5
Natty:
Report link

https://comparexml.com/

It is an online tool specifically for comparing XML formats, which allows you to clearly see the differences between two XML files.

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Unit Stack

79686992

Date: 2025-07-02 07:26:32
Score: 2
Natty:
Report link

Use innerHTML (MDN docs).

Example:

let mapScript = document.createElement('script');
mapScript.innerHTML = 'the whole script body content';
Reasons:
  • Probably link only (1):
  • Low length (1):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: biggujo

79686987

Date: 2025-07-02 07:23:31
Score: 1
Natty:
Report link

To view the data in the Core data database, first locate the path of the SQLite database for our project. This can be done by following the steps.

  1. Select your project target.

  2. Click on Edit Scheme.

  3. In the Run section, select the Arguments tab, and there you can see the environment variables.

  4. Add a new environment variable com.apple.CoreData.SQLDebug and set its value to 1.

  5. Now close and run the app in the simulator.

  6. In the console you can now see all the logs related to Core Data.

  7. Search for CoreData: annotation: Connecting to sqlite database file at. This will give you the exact path for your database.

enter image description here

Great, you have done your first step, and now you have the path to the SQLite database of your project. To view the data in the database, you can use any of the free SQLite database viewers.

I am using DB Browser. You can download it using this link https://sqlitebrowser.org/.

Now go to the path that has been printed in the previous step using Finder, there you will see the .sqlite file. Double tap on it to open in the DB Browser. That's it, now you can inspect your database structure and values stored in it.

Reasons:
  • Blacklisted phrase (1): enter image description here
  • Blacklisted phrase (1): this link
  • Whitelisted phrase (-1.5): you can use
  • Long answer (-1):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Raguraman

79686985

Date: 2025-07-02 07:20:30
Score: 1.5
Natty:
Report link

I would say the first thing to check would be to go to this website:
https://marketplace.visualstudio.com/vscode. If it doesn't load or you have a connection issue, it could be due to a VPN that you have installed.
VS Code uses its own Node.js-based backend. Sometimes it can't connect even if your browser can. So you could try opening the VS Code terminal and running:
curl https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery
If that fails, then there is a network restriction or a DNS issue inside your environment.

Reasons:
  • No code block (0.5):
  • Low reputation (1):
Posted by: Milo Campbell

79686982

Date: 2025-07-02 07:19:30
Score: 0.5
Natty:
Report link

An alternative solution to Instant would be ZonedDateTime, which is also present in the java.time package.

import java.time.*;

Instant oneYearFromNow = ZonedDateTime.now().plusYears(1).toInstant();
Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: Valentine Tobah

79686981

Date: 2025-07-02 07:18:30
Score: 0.5
Natty:
Report link

SQS uses envelope encryption, so the producer needs kms:GenerateDataKey to create a data key to encrypt the messages it sends, and it needs kms:Decrypt to verify the data key's integrity. It doesn't need kms:Encrypt, because it uses the data key to do the encryption.

The consumer just needs kms:Decrypt to decrypt the encrypted data key and then it can decrypt the messages using that data key.

So the repost doc is correct.

How is the application able to function correctly with the permissions 'reversed' like this?

My guess would be that either your queue isn't set up for SSE-KMS encryption, or your KMS key has the necessary permissions defined in its key policy.

Are there any pitfalls or potential problems with this arrangement I need to be aware of?

Assuming the queue is encrypted, then you've got duplicate permissions defined in different places which isn't ideal, and you've got permissions defined that you don't need (e.g. neither producer nor consumer need kms:Encrypt).

Reasons:
  • Blacklisted phrase (0.5): I need
  • Long answer (-0.5):
  • Has code block (-0.5):
  • Contains question mark (0.5):
  • Low reputation (0.5):
Posted by: andycaine

79686978

Date: 2025-07-02 07:17:29
Score: 0.5
Natty:
Report link

My first impression was that 100,000 objects can be a lot; try to reduce it to something absurdly small like 1,000 and see if it reproduces. Also, you are not overflowing to disk (overflowToDisk=false), so there is a hard limit on swapping, which would explain the OOM error. Don't use eternal="true" unless you know very well the amount and size of the objects you are storing, because you are preventing the cache from cleaning up less-used objects. Also remember to check -Xmx and -Xms in the JVM.

With the information you provide, this seems more like a question resolvable with a code companion (AI).

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Low reputation (0.5):
Posted by: Raul Lapeira Herrero

79686975

Date: 2025-07-02 07:15:28
Score: 3
Natty:
Report link

I had the same issue and was very frustrated - I'm using dates in Sheets a lot! Thank you for the solution, it works perfect!

Reasons:
  • Blacklisted phrase (0.5): Thank you
  • Whitelisted phrase (-1): I had the same
  • Low length (1):
  • No code block (0.5):
  • Unregistered user (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: user30945448

79686964

Date: 2025-07-02 07:03:25
Score: 0.5
Natty:
Report link
dpkg -l | grep nvidia-jetpack

Use the above instead; it shows the actual installed JetPack version on the machine.

sudo apt-cache show nvidia-jetpack

The command above only shows the cached packages of JetPack, and may show several entries in the list. Just run sudo apt clean if you want to clear it.

Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: Jason Tan

79686957

Date: 2025-07-02 06:58:24
Score: 1
Natty:
Report link
  1. Open the settings of the Terminal app (`Command` + `,`)

  2. Profiles

  3. Go to "Keyboard" tab

  4. Uncheck "Use Option as Meta key"

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Self-answer (0.5):
  • High reputation (-1):
Posted by: Nils Reichardt

79686940

Date: 2025-07-02 06:52:21
Score: 7
Natty: 6
Report link

I am also facing the same issue. Could you find a solution to this problem? I also try different parameters in API but it doesn't work.

Reasons:
  • Blacklisted phrase (1): I am also facing the same issue
  • Low length (1):
  • No code block (0.5):
  • Me too answer (2.5): I am also facing the same issue
  • Contains question mark (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: user30945117

79686939

Date: 2025-07-02 06:51:21
Score: 0.5
Natty:
Report link

To resolve this issue, I updated my app's build.gradle file to target the required API level:

android {
    compileSdkVersion 35
    defaultConfig {
        targetSdkVersion 35
    }
}

If you still get the warning, then please remove the older bundles from the open/closed testing tracks.

Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: Nikunj Panchal

79686938

Date: 2025-07-02 06:50:20
Score: 2
Natty:
Report link

Yeah, figured. It is possible, but I lack the expertise. However, I found that you need su privileges for the app, which is impossible to obtain on these devices; they are closed source and very little documentation is available from the manufacturer. I even tried adb and all the other ways (tried rooting too, but failed miserably). Somehow, accessing the serial port via Termux is possible; I even tried a background terminal process (too slow for me).
Later I found a plugin called SPUP, a UART bridge approach (my last resort), but it is smart and can adapt to all platforms without any code changes, which thankfully worked.

So the problem is currently solved, but I am still open to suggestions and alternatives.
Thanks

Reasons:
  • Blacklisted phrase (0.5): Thanks
  • Long answer (-0.5):
  • No code block (0.5):
  • Self-answer (0.5):
  • Low reputation (1):
Posted by: Club Unity

79686936

Date: 2025-07-02 06:50:20
Score: 2.5
Natty:
Report link

The issue lay in ModSecurity. It was set to "Detection only" with the default OWASP ruleset, but even so, it appeared to throw some kind of error. I have been able to resolve it by setting ModSecurity to Off, or to a different ruleset like Atomic Standard (and then it can be fully on, yielding no problems).

Reasons:
  • Low length (0.5):
  • No code block (0.5):
  • Self-answer (0.5):
  • Single line (0.5):
  • Low reputation (0.5):
Posted by: Quan

79686930

Date: 2025-07-02 06:46:19
Score: 2
Natty:
Report link

I am facing the same issue, I have attached the log for this issue. UUID is not loadable by azure function apps

import uuid
2025-07-02T06:28:33.900 [Information] File "/home/site/wwwroot/.python_packages/lib/site-packages/uuid.py", line 138
2025-07-02T06:28:33.900 [Information] if not 0 <= time_low < 1<<32L:
2025-07-02T06:28:33.900 [Information] ^
2025-07-02T06:28:33.900 [Error] SyntaxError: invalid decimal literal
2025-07-02T06:28:33.900 [Information] Traceback (most recent call last):
2025-07-02T06:28:33.900 [Information] File "/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/main.py", line 61, in main
2025-07-02T06:28:33.900 [Information] return asyncio.run(start_async(
2025-07-02T06:28:33.900 [Information] ^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-02T06:28:33.900 [Information] File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
2025-07-02T06:28:33.900 [Information] return runner.run(main)
2025-07-02T06:28:33.900 [Information] ^^^^^^^^^^^^^^^^
2025-07-02T06:28:33.900 [Information] File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
2025-07-02T06:28:33.900 [Information] return self._loop.run_until_complete(task)
2025-07-02T06:28:33.900 [Information] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-02T06:28:33.900 [Information] File "/usr/local/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
2025-07-02T06:28:33.900 [Information] return future.result()
Reasons:
  • Long answer (-1):
  • Has code block (-0.5):
  • Me too answer (2.5): I am facing the same issue
  • Low reputation (1):
Posted by: Shwetank

79686929

Date: 2025-07-02 06:43:19
Score: 3
Natty:
Report link

String.join(", ", List.of("one", "two", "three"));

Reasons:
  • Low length (1.5):
  • No code block (0.5):
  • Single line (0.5):
  • Low reputation (0.5):
Posted by: Julian

79686928

Date: 2025-07-02 06:43:19
Score: 3.5
Natty:
Report link

I found the problem. I had this in index.css.

.MuiInputLabel-outlined {
  transform: translate(12px, 14px) scale(1) !important;
}

How can I disable this only on DateTimePicker but leave it as it is on everything else?

Reasons:
  • Blacklisted phrase (0.5): How can I
  • Low length (0.5):
  • Has code block (-0.5):
  • Ends in question mark (2):
  • Self-answer (0.5):
  • Low reputation (0.5):
Posted by: user19590290

79686919

Date: 2025-07-02 06:39:18
Score: 1
Natty:
Report link

For me, it was enough to stop the build and redo it. Maybe it was stuck due to a connection issue.

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Single line (0.5):
  • High reputation (-1):
Posted by: user2342558

79686916

Date: 2025-07-02 06:37:17
Score: 3
Natty:
Report link

If your database connection is defined using environment variables, you must use the Rails console as the application user. I get this error when I try to use the console as root, but my app user is an unprivileged user.

Reasons:
  • RegEx Blacklisted phrase (1): I get this error
  • Low length (0.5):
  • No code block (0.5):
  • Single line (0.5):
  • Low reputation (0.5):
Posted by: Jussi Hirvi

79686912

Date: 2025-07-02 06:34:16
Score: 2.5
Natty:
Report link

I also encountered this problem. The reason was that the word "referrer" was misspelled as "referer".
I also created a new Google account, and the result is OK now.

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Low reputation (1):
Posted by: xechoz zheng

79686899

Date: 2025-07-02 06:20:12
Score: 0.5
Natty:
Report link

You can set the system property -Djavax.net.debug=ssl:handshake when starting your Java application

For example you can run this in the command line

java -Djavax.net.debug=ssl:handshake -jar jarName.jar

Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: KennyC

79686894

Date: 2025-07-02 06:16:11
Score: 3.5
Natty:
Report link

I think there is an error in the answer https://stackoverflow.com/a/46590476/16460395. The file used in gzip, gerr := gzip.NewReader(file) will never be closed, because Reader.Close does not close the underlying io.Reader (https://pkg.go.dev/compress/gzip#Reader.Close).

Reasons:
  • Blacklisted phrase (1): stackoverflow
  • Probably link only (1):
  • Low length (1):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: Andrey Metelyov

79686892

Date: 2025-07-02 06:16:11
Score: 5
Natty:
Report link

I had to add prisma generate to the build command 😒😂

Reasons:
  • Blacklisted phrase (1): 😂
  • Low length (1.5):
  • No code block (0.5):
  • Self-answer (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: TobCraft3521

79686891

Date: 2025-07-02 06:15:10
Score: 2
Natty:
Report link

As of July 02, 2025, the latest SDK version is 35 (Android 15).

targetSdkVersion = 35
Reasons:
  • Low length (1.5):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: ROHITH A O

79686882

Date: 2025-07-02 06:07:08
Score: 1
Natty:
Report link

You must own genuine Apple hardware even for virtualised macOS instances. Apple permits macOS VMs only on Apple-branded machines. Practically, even if you skirt licensing, GPU passthrough limits make the iOS 18 simulator crawl on most Hyper-V or KVM hosts.

If budget is the blocker, consider Apple’s Xcode Cloud or third-party CI services that compile and notarise your build remotely; you can still write code on Windows/Linux and push via Git. For Android work, Google’s latest system-requirements page says 8 GB RAM is minimum, 16 GB recommended, and any post-2017 Intel/AMD CPU with VT-x/AMD-V will handle the Emulator at 60 fps. Mixing these approaches keeps you license-clean and within NIST 800-163 guidance that discourages unvetted VMs for signing keys.


Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Ankur Shrivastav

79686868

Date: 2025-07-02 05:45:03
Score: 8.5
Natty: 5
Report link

чуваки выше, спасибо вам!!!!!! (guys above, thank you!!!!!!)

Reasons:
  • Low length (2):
  • No code block (0.5):
  • Unregistered user (0.5):
  • Has no white space (0.5):
  • Single line (0.5):
  • No latin characters (3.5):
  • Low reputation (1):
Posted by: user30944743

79686867

Date: 2025-07-02 05:45:03
Score: 1.5
Natty:
Report link
  1. Initialize your (JSON object from a SharePoint list) as an Array

  2. Parse the JSON Object

  3. Using Select - Data Operation to get values - Company, Date From, Date To and Title

  4. Append to string variable to get

    [
      {
        "Company": "Line2",
        "Date From": "2022-03-21",
        "Date To": "2022-03-29",
        "Title": "Title 2"
      },
      {
        "Company": "Test1",
        "Date From": "2022-03-30",
        "Date To": "2022-03-31",
        "Title": "Title 1"
      }
    ]
enter image description here
    
Reasons:
  • Blacklisted phrase (1): enter image description here
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: Vijayamathankumar

79686866

Date: 2025-07-02 05:43:02
Score: 1
Natty:
Report link

The evaluation_strategy keyword argument in TrainingArguments has now been replaced with eval_strategy. Using the old argument causes:

TrainingArguments.__init__() got an unexpected keyword argument 'evaluation_strategy'
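
A minimal sketch of the new spelling, assuming a recent transformers release (the output_dir and batch size are placeholder values):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",               # placeholder output directory
    eval_strategy="epoch",          # was: evaluation_strategy="epoch"
    per_device_train_batch_size=8,  # placeholder
)
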
Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: Maxjuster

79686860

Date: 2025-07-02 05:32:59
Score: 1.5
Natty:
Report link
from django import forms
from .models import Invoice_customer_list,Invoice_product_list


# Form for the Invoice model
class Invoice_customer_Form(forms.ModelForm):
    class Meta:
        model = Invoice_customer_list
        fields = ['customer']
        labels = {
            'customer': 'Select Customer',
            
        }

class Invoice_product_form(forms.ModelForm):
    class Meta:
        model = Invoice_product_list
        fields = ['product']
        labels = {
            'product':'Select Product'
        }
 



I want to create a form like this; here is the updated code.





Reasons:
  • RegEx Blacklisted phrase (1): i want
  • Long answer (-0.5):
  • Has code block (-0.5):
  • Self-answer (0.5):
  • Low reputation (1):
Posted by: Kailase Kapil

79686859

Date: 2025-07-02 05:30:59
Score: 2
Natty:
Report link

We adopted a Gitflow-like process where we have a branch for each stage we deploy to:

Feature and bugfix branches are started from and merged back to development. These merges are squashed to simplify the Git history (optional). Then, PRs are made from development to test and from test to acceptance to promote these new releases. Each branch triggers its own build and release.

This setup allows us to still hotfix specific environments in case of an urgent problem. When merging from development to another environment, we use a normal merge commit to preserve the Git history. This way, new commits just get added to different branches. In order to make sure teammates don't make mistakes when merging, the specific type of merge we want for the branches is specified in the branch protection policies.

We do not use release branches or tag specific releases. Instead, we use conventional commit messages and a tool called GitVersion to automatically calculate semantic version numbers that we then use as build and release numbers.

Reasons:
  • RegEx Blacklisted phrase (2): urgent
  • Long answer (-1):
  • No code block (0.5):
  • Low reputation (0.5):
Posted by: Joren

79686858

Date: 2025-07-02 05:28:58
Score: 1
Natty:
Report link

The ECS game-domain specifics are that you have more of all of these things.

So a component in C++ is a POD struct, and the key point is that you have a container of PODs.

Access to and management of the PODs goes through the container interface, for example std::array or std::vector, preferring access by container index rather than by pointer.

That is because ECS and the related cache locality use those cores and caches optimally.

The components are updated in a system class that uses the container to process them in bulk.

You often won't see any, if not all, of the downsides of inheritance in very small scoped games like PONG.

But in a game of ARMA or Homeworld scope, with a huge diversity of entities, you will.

ECS is plural: entity IDs, components; so there is a point where OOP mixes in, but it is at and above the container level:

entities and components container interfaces and managers.
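
As a rough illustration of that idea (a sketch, not anyone's actual engine code), here is a minimal Python version where components live in plain containers, the entity id is just the container index, and a system updates the components in bulk:

# Component containers: one plain record per entity, accessed by index.
positions = []   # [x, y] per entity
velocities = []  # [vx, vy] per entity

def create_entity(x, y, vx, vy):
    positions.append([x, y])
    velocities.append([vx, vy])
    return len(positions) - 1  # the entity id is the container index

def movement_system(dt):
    # The "system" walks the containers and updates the components in bulk.
    for i in range(len(positions)):
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt

e = create_entity(0.0, 0.0, 1.0, 2.0)
movement_system(0.016)
print(positions[e])  # [0.016, 0.032]

In C++ the containers would be std::vector of POD structs, which is what gives the cache locality mentioned above.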

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: SuperG316

79686838

Date: 2025-07-02 04:51:51
Score: 1.5
Natty:
Report link

MPMoviePlayerController supports some legacy formats better while AVPlayer requires proper HLS formatting. Check your .m3u8 stream for correct audio codec, MIME type, and HLS compliance for AVPlayer.

Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: Jack Qasim

79686826

Date: 2025-07-02 04:31:46
Score: 5.5
Natty: 6.5
Report link

Can you create a commit in a new file and create a github workflow to append that file text to original file when each commit is made?

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Ends in question mark (2):
  • Single line (0.5):
  • Starts with a question (0.5): Can you
  • Low reputation (1):
Posted by: Phalaksha C G

79686825

Date: 2025-07-02 04:30:46
Score: 2.5
Natty:
Report link

On the terminal,

#curl -s http://169.254.169.254/latest/meta-data/instance-id

gives the instance ID. 169.254.169.254 is a special AWS internal IP address used to access the Instance Metadata Service (IMDS) from within an EC2 instance.

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Muhammed Aslam C

79686822

Date: 2025-07-02 04:27:45
Score: 1
Natty:
Report link

Thanks for the explanation, Yong Shun. That clears things up. I was also under the impression that in ngOnInit the @Input() values would be ready, but it makes sense now why they are still null at that point. I'll try using ngOnChanges or AfterViewInit depending on the use case. The mention of Angular Signals is interesting too; I hadn't explored that yet. Appreciate the insights!

Reasons:
  • Blacklisted phrase (0.5): Thanks
  • Has code block (-0.5):
  • Single line (0.5):
  • Low reputation (0.5):
Posted by: Laura Baird

79686819

Date: 2025-07-02 04:24:44
Score: 1.5
Natty:
Report link

1. Register the New Menu Location in functions.php

2. Assign the Menu in WordPress Admin

3. Add the Menu to the firstpage Template

Reasons:
  • Low length (1):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: sweta merai

79686805

Date: 2025-07-02 04:05:40
Score: 2.5
Natty:
Report link

Unlike Express, there is no req.body object; it therefore has to be handled by the request handlers. You can follow this article:

https://nodejs.org/en/learn/modules/anatomy-of-an-http-transaction#request-body

let incomingData = [];
req.on('data', chunk => {
    incomingData.push(chunk);
})
.on('end', () => {
    incomingData = Buffer.concat(incomingData);
    let name = JSON.parse(incomingData.toString());
    console.log("converted to string", name.name);
});
Reasons:
  • Blacklisted phrase (1): this article
  • Probably link only (1):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: Manprabesh Boruah

79686804

Date: 2025-07-02 04:04:40
Score: 3
Natty:
Report link

If your input file has the .webm extension, use this:

ffmpeg -re -i "your_file.webm" -map 0 -c:a copy -c:v copy -single_file "true" -f dash "your_output.mpd"

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Unregistered user (0.5):
  • Low reputation (1):
Posted by: Ahmed

79686803

Date: 2025-07-02 04:03:39
Score: 2.5
Natty:
Report link

Fixed File Share access issue by creating Private Endpoint with Private DNS Zone; self-hosted Azure DevOps agent now has access

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Single line (0.5):
  • Low reputation (0.5):
Posted by: iluv_dev

79686802

Date: 2025-07-02 04:00:39
Score: 1
Natty:
Report link

We need at least 3 items to build the essential triangles that make up an AVL tree, and some data to store and search on.

How to politely say the basic data structure is wrong for any AVL tree? Each node of an AVL tree needs at least 5 items:

  1. the data, or a pointer to the data; the data MUST include a key

  2. a pointer to the left child node

  3. a pointer to the right child node

  4. a pointer to the PARENT node, which is missing here

  5. one or more items to enforce the balance of the tree (one could store these extra item(s) separately)

As a side note, one could use an index into an array rather than a C-style pointer. In any case, the code is a cascade of design mistakes, with errors in each function.

No doubt it compiles (under which OS and compiler?). To debug and test, you will want to write one or a series of tree display subprograms. I'm currently building a general-purpose 32/64-bit Intel AVL tree. I'm at about 2000 lines and not done [verbose is my nickname]. It is intended for symbol tables for a compiler. I did some web searches for code and found a lot of broken examples. Search in an AVL tree should be about m*O(lg N). Insert about m*O(2 * lg N) because of the retrace. Delete and other operations, such as bulk load, are not needed for my intended use.
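
A minimal Python sketch of the node layout described above (one assumption: the balance information is stored as a subtree height, from which the balance factor is derived):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class AVLNode:
    key: Any                             # 1. the data (or a pointer to it); must include a key
    left: Optional["AVLNode"] = None     # 2. pointer to the left child
    right: Optional["AVLNode"] = None    # 3. pointer to the right child
    parent: Optional["AVLNode"] = None   # 4. pointer to the parent, needed for the retrace
    height: int = 1                      # 5. balance information (height of this subtree)

def balance_factor(node: AVLNode) -> int:
    left_h = node.left.height if node.left else 0
    right_h = node.right.height if node.right else 0
    return left_h - right_h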

Reasons:
  • Long answer (-1):
  • No code block (0.5):
  • Contains question mark (0.5):
  • Low reputation (1):
Posted by: Jim Lecka

79686801

Date: 2025-07-02 03:58:38
Score: 0.5
Natty:
Report link

Some rules changed with Tailwind CSS v4.

The v4 Angular init tutorials use a CSS file and the @tailwindcss/postcss plugin.

Differences vs. v3:

  1. V4 does not include/create a tailwindcss.config.js file by default; v4 needs a .postcss.json in higher Angular versions.

  2. The v4 tutorial uses styles.css in the Angular project root, not styles.scss, so some usage changed.

  3. styles.css needs @import "tailwindcss"; added, not the v3 style of adding @tailwind "xxx"; directives.

  4. In v4, a CSS file in an Angular component needs the extra statement @import "tailwindcss" if you want to use @apply.

  5. For manual "dark" mode, Tailwind CSS v4 needs the statement @custom-variant dark (&:where(.dark, .dark *)); added in styles.css.

  6. But if you want to use dark: in another component (lazy routing or a lazy component), you must add @import "tailwindcss"; in that other CSS file and add @custom-variant dark (&:where(.dark, .dark *)); again.

To add Tailwind CSS v4 in a higher Angular version, the flow is:

Reasons:
  • Long answer (-1):
  • Has code block (-0.5):
  • User mentioned (1): @import
  • User mentioned (0): @apply
  • Low reputation (1):
Posted by: xhzhang zhang

79686800

Date: 2025-07-02 03:55:37
Score: 0.5
Natty:
Report link

You may want to try adding the resizable window flag on window creation.

SDL_WINDOW_RESIZABLE

Like this:

window = SDL_CreateWindow("Test Window", 800, 800, SDL_WINDOW_BORDERLESS | SDL_WINDOW_RESIZABLE);
Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: Andres Hernandez

79686798

Date: 2025-07-02 03:49:36
Score: 1.5
Natty:
Report link

If you want to set it globally for the whole app, use

<style>
        <item name="android:includeFontPadding">false</item>
</style>

inside your theme.xml.

Reasons:
  • Low length (1):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: Dixit vara

79686793

Date: 2025-07-02 03:38:34
Score: 1.5
Natty:
Report link
dat<-as.data.frame(rexp(1000,0.2))
g <- ggplot(dat, aes(x = dat[,1])) 

g + geom_histogram(alpha = 0.2, binwidth = 5, colour = "black") +
 geom_line(stat = "bin", binwidth = 5, linewidth = 1)

I met the same problem. You define stat = "bin" for geom_line; this makes geom_line compute its values the same way as geom_histogram or geom_freqpoly.

Result:

enter image description here

Reasons:
  • Blacklisted phrase (1): enter image description here
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: Quang Hà Trần

79686788

Date: 2025-07-02 03:31:32
Score: 1.5
Natty:
Report link

According to the Declaration Merging section in the TypeScript official documentation, it mentions:

Non-function members of the interfaces should be unique. If they are not unique, they must be of the same type. The compiler will issue an error if the interfaces both declare a non-function member of the same name, but of different types.

Reasons:
  • No code block (0.5):
  • Low reputation (1):
Posted by: Arslan

79686782

Date: 2025-07-02 03:23:30
Score: 2.5
Natty:
Report link

Absolutely none of these work for me. I downloaded the latest Android Studio today, 7/1/2025. I have no idea what version it is because there are too many numbers to figure it out. This should not be that difficult.

I just want the toolbar with all of the icons to display at the top. It has icons for running in debug mode, adding/removing a comment block, etc. I am not talking about the menu bar that has File, Edit, View, etc. I want the icon bar or toolbar, whatever you want to call it.

Reasons:
  • RegEx Blacklisted phrase (1): I want
  • No code block (0.5):
  • Low reputation (1):
Posted by: Harvy Ackermans

79686779

Date: 2025-07-02 03:17:29
Score: 1
Natty:
Report link

If we track all possibilities, then

The first if condition gives us

T(n) = O(n/2) + T(n/2), which is equivalent to T(n) = O(n) + T(n/2).

The second gives us

T(n) = 2*O(n/2) + T(n/2), which is equivalent to T(n) = O(n) + T(n/2).

For the third one,

you can easily see that all possibilities will be equivalent to T(n) = O(n) + T(n/4).

From these recursions you can deduce that T(n) = O(n), i.e. the time complexity is linear.

On your merge sort analogy: the array is being broken up in a similar way, but if you observe carefully, we don't operate on every chunk, unlike merge sort. Basically, at each of the log n levels of merge sort we are dealing with all n elements, while here we only deal with n/(2^i), i.e. the work decays exponentially.
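
To spell out that deduction, unrolling T(n) = O(n) + T(n/2) gives a geometric series (with c the hidden constant): T(n) <= c*n + T(n/2) <= c*(n + n/2 + n/4 + ...) <= 2*c*n = O(n). The T(n/4) case only shrinks the series faster, so the same linear bound holds.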

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Advait Gupta

79686777

Date: 2025-07-02 03:15:28
Score: 4.5
Natty:
Report link
import os
import re
import asyncio
import logging
import time
import gc
from pathlib import Path
from telethon import TelegramClient, events
from telethon.tl.types import MessageMediaDocument, InputDocumentFileLocation
from telethon.tl.functions.upload import GetFileRequest
from telethon.crypto import AES
from telethon.errors import FloodWaitError
import aiofiles
from concurrent.futures import ThreadPoolExecutor

# Optimize garbage collection for large file operations
gc.set_threshold(700, 10, 10)

# Set environment variables for better performance
os.environ['PYTHONUNBUFFERED'] = '1'
os.environ['PYTHONDONTWRITEBYTECODE'] = '1'

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)

TELEGRAM_API_ID = int(os.getenv("TELEGRAM_API_ID"))
TELEGRAM_API_HASH = os.getenv("TELEGRAM_API_HASH")
TELEGRAM_SESSION_NAME = os.path.join('session', os.getenv('TELEGRAM_SESSION_NAME', 'bot_session'))
TELEGRAM_GROUP_ID = int(os.getenv("GROUP_CHAT_ID"))

TOPIC_IDS = {
    'Doc 1': 137,
}

TOPIC_ID_TO_CATEGORY = {
    137: 'doc 1',
}

CATEGORY_TO_DIRECTORY = {
    'doc 1': '/mnt/disco1/test',
}

class FastTelegramDownloader:
    def __init__(self, client, max_concurrent_downloads=4):
        self.client = client
        self.max_concurrent_downloads = max_concurrent_downloads
        self.semaphore = asyncio.Semaphore(max_concurrent_downloads)
    
    async def download_file_fast(self, message, dest_path, chunk_size=1024*1024, progress_callback=None):
        """
        Fast download using multiple concurrent connections for large files
        """
        document = message.media.document
        file_size = document.size
        
        # For smaller files, use standard download
        if file_size < 10 * 1024 * 1024:  # Less than 10MB
            return await self._standard_download(message, dest_path, progress_callback)
        
        # Create input location for the file
        input_location = InputDocumentFileLocation(
            id=document.id,
            access_hash=document.access_hash,
            file_reference=document.file_reference,
            thumb_size=""
        )
        
        # Calculate number of chunks and their sizes
        chunks = []
        offset = 0
        chunk_id = 0
        
        while offset < file_size:
            chunk_end = min(offset + chunk_size, file_size)
            chunks.append({
                'id': chunk_id,
                'offset': offset,
                'limit': chunk_end - offset
            })
            offset = chunk_end
            chunk_id += 1
        
        logging.info(f"📦 Dividiendo archivo en {len(chunks)} chunks de ~{chunk_size//1024}KB")
        
        # Download chunks concurrently
        chunk_data = {}
        downloaded_bytes = 0
        start_time = time.time()
        
        async def download_chunk(chunk):
            async with self.semaphore:
                try:
                    result = await self.client(GetFileRequest(
                        location=input_location,
                        offset=chunk['offset'],
                        limit=chunk['limit']
                    ))
                    
                    # Update progress
                    nonlocal downloaded_bytes
                    downloaded_bytes += len(result.bytes)
                    if progress_callback:
                        progress_callback(downloaded_bytes, file_size)
                    
                    return chunk['id'], result.bytes
                except Exception as e:
                    logging.error(f"Error downloading chunk {chunk['id']}: {e}")
                    return chunk['id'], None
        
        try:
            # Execute downloads concurrently
            tasks = [download_chunk(chunk) for chunk in chunks]
            results = await asyncio.gather(*tasks, return_exceptions=True)
            
            # Collect successful chunks
            for result in results:
                if isinstance(result, tuple) and result[1] is not None:
                    chunk_id, data = result
                    chunk_data[chunk_id] = data
            
            # Verify all chunks downloaded successfully
            if len(chunk_data) != len(chunks):
                logging.warning(f"Some chunks failed, falling back to standard download")
                return await self._standard_download(message, dest_path, progress_callback)
            
            # Write file in correct order
            async with aiofiles.open(dest_path, 'wb') as f:
                for i in range(len(chunks)):
                    if i in chunk_data:
                        await f.write(chunk_data[i])
                    else:
                        raise Exception(f"Missing chunk {i}")
            
            end_time = time.time()
            duration = end_time - start_time
            speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
            logging.info(f"✅ Fast download completed: {dest_path} - Speed: {speed:.2f} MB/s")
            return dest_path
            
        except Exception as e:
            logging.error(f"Fast download failed: {e}")
            return await self._standard_download(message, dest_path, progress_callback)
    
    async def _standard_download(self, message, dest_path, progress_callback=None):
        """Fallback to standard download method"""
        document = message.media.document
        file_size = document.size
        
        # Optimize chunk size based on file size
        if file_size > 100 * 1024 * 1024:  # >100MB
            part_size_kb = 1024    # 1MB chunks
        elif file_size > 50 * 1024 * 1024:  # >50MB
            part_size_kb = 1024    # 1MB chunks
        elif file_size > 10 * 1024 * 1024:  # >10MB
            part_size_kb = 512     # 512KB chunks
        else:
            part_size_kb = 256     # 256KB chunks
        
        start_time = time.time()
        
        await self.client.download_file(
            document,
            file=dest_path,
            part_size_kb=part_size_kb,
            file_size=file_size,
            progress_callback=progress_callback
        )
        
        end_time = time.time()
        duration = end_time - start_time
        speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
        logging.info(f"📊 Standard download speed: {speed:.2f} MB/s")
        return dest_path

class MultiClientDownloader:
    def __init__(self, api_id, api_hash, session_base_name, num_clients=3):
        self.api_id = api_id
        self.api_hash = api_hash
        self.session_base_name = session_base_name
        self.num_clients = num_clients
        self.clients = []
        self.client_index = 0
        self.fast_downloaders = []
        
    async def initialize_clients(self):
        """Initialize multiple client instances"""
        for i in range(self.num_clients):
            session_name = f"{self.session_base_name}_{i}"
            client = TelegramClient(
                session_name,
                self.api_id,
                self.api_hash,
                connection_retries=3,
                auto_reconnect=True,
                timeout=300,
                request_retries=3,
                flood_sleep_threshold=60,
                system_version="4.16.30-vxCUSTOM",
                device_model="HighSpeedDownloader",
                lang_code="es",
                system_lang_code="es",
                use_ipv6=False
            )
            await client.start()
            self.clients.append(client)
            self.fast_downloaders.append(FastTelegramDownloader(client, max_concurrent_downloads=2))
            logging.info(f"✅ Cliente {i+1}/{self.num_clients} inicializado")
            
    def get_next_client(self):
        """Get next client using round-robin"""
        client = self.clients[self.client_index]
        downloader = self.fast_downloaders[self.client_index]
        self.client_index = (self.client_index + 1) % self.num_clients
        return client, downloader
    
    async def close_all_clients(self):
        """Clean shutdown of all clients"""
        for client in self.clients:
            await client.disconnect()

class TelegramDownloader:
    def __init__(self, multi_client_downloader):
        self.multi_client = multi_client_downloader
        self.downloaded_files = set()
        self.load_downloaded_files()
        self.current_download = None
        self.download_stats = {
            'total_files': 0,
            'total_bytes': 0,
            'total_time': 0
        }
    
    def _create_download_progress_logger(self, filename):
        """Progress logger with reduced frequency"""
        start_time = time.time()
        last_logged_time = start_time
        last_percent_reported = -5
    
        MIN_STEP = 10  # Report every 10%
        MIN_INTERVAL = 5  # Or every 5 seconds
    
        def progress_bar_function(done_bytes, total_bytes):
            nonlocal last_logged_time, last_percent_reported
    
            current_time = time.time()
            percent_now = int((done_bytes / total_bytes) * 100)
    
            if (percent_now - last_percent_reported < MIN_STEP and
                    current_time - last_logged_time < MIN_INTERVAL):
                return
    
            last_percent_reported = percent_now
            last_logged_time = current_time
    
            speed = done_bytes / 1024 / 1024 / (current_time - start_time or 1)
            msg = (f"⏬ {filename} | "
                f"{percent_now}% | "
                f"{speed:.1f} MB/s | "
                f"{done_bytes/1024/1024:.1f}/{total_bytes/1024/1024:.1f} MB")
            logging.info(msg)
            
        return progress_bar_function
    
    async def _process_download(self, message, metadata, filename, dest_path):
        try:
            self.current_download = filename
            logging.info(f"🚀 Iniciando descarga de: {filename}")
            
            progress_logger = self._create_download_progress_logger(filename)
            temp_path = dest_path.with_name(f"temp_{metadata['file_name_telegram']}")
            
            # Get next available client and downloader
            client, fast_downloader = self.multi_client.get_next_client()
            
            file_size = message.media.document.size
            start_time = time.time()
            
            try:
                # Try fast download first for large files
                if file_size > 20 * 1024 * 1024:  # Files larger than 20MB
                    logging.info(f"📦 Usando descarga rápida para archivo de {file_size/1024/1024:.1f}MB")
                    await fast_downloader.download_file_fast(
                        message, temp_path, progress_callback=progress_logger
                    )
                else:
                    # Use standard optimized download for smaller files
                    await fast_downloader._standard_download(
                        message, temp_path, progress_callback=progress_logger
                    )
                    
            except Exception as download_error:
                logging.warning(f"Descarga optimizada falló, usando método estándar: {download_error}")
                # Final fallback to basic download
                await client.download_file(
                    message.media.document,
                    file=temp_path,
                    part_size_kb=512,
                    file_size=file_size,
                    progress_callback=progress_logger
                )
            
            if not temp_path.exists():
                raise FileNotFoundError("No se encontró el archivo descargado")
            
            # Atomic rename
            temp_path.rename(dest_path)
            
            # Update statistics
            end_time = time.time()
            duration = end_time - start_time
            speed = (file_size / 1024 / 1024) / duration if duration > 0 else 0
            
            self.download_stats['total_files'] += 1
            self.download_stats['total_bytes'] += file_size
            self.download_stats['total_time'] += duration
            
            avg_speed = (self.download_stats['total_bytes'] / 1024 / 1024) / self.download_stats['total_time'] if self.download_stats['total_time'] > 0 else 0
            
            logging.info(f"✅ Descarga completada: {dest_path}")
            logging.info(f"📊 Velocidad: {speed:.2f} MB/s | Promedio sesión: {avg_speed:.2f} MB/s")
            
            self.save_downloaded_file(str(message.id))
        
        except Exception as e:
            logging.error(f"❌ Error en descarga: {str(e)}", exc_info=True)
            # Cleanup on error
            for path_var in ['temp_path', 'dest_path']:
                if path_var in locals():
                    path = locals()[path_var]
                    if hasattr(path, 'exists') and path.exists():
                        try:
                            path.unlink()
                        except:
                            pass
            raise
        finally:
            self.current_download = None
    
    def load_downloaded_files(self):
        try:
            if os.path.exists('/app/data/downloaded.log'):
                with open('/app/data/downloaded.log', 'r', encoding='utf-8') as f:
                    self.downloaded_files = set(line.strip() for line in f if line.strip())
                logging.info(f"📋 Cargados {len(self.downloaded_files)} archivos ya descargados")
        except Exception as e:
            logging.error(f"Error cargando archivos descargados: {str(e)}")
            
    def save_downloaded_file(self, file_id):
        try:
            with open('/app/data/downloaded.log', 'a', encoding='utf-8') as f:
                f.write(f"{file_id}\n")
            self.downloaded_files.add(file_id)
        except Exception as e:
            logging.error(f"Error guardando archivo descargado: {str(e)}")
    
    def parse_metadata(self, caption):
        metadata = {}
        try:
            if not caption:
                logging.debug(f"📂 No hay caption")
                return None
                
            pattern = r'^(\w[\w\s]*):\s*(.*?)(?=\n\w|\Z)'
            matches = re.findall(pattern, caption, re.MULTILINE)
            
            for key, value in matches:
                key = key.strip().lower().replace(' ', '_')
                metadata[key] = value.strip()
                
            required_fields = [
                'type', 'tmdb_id', 'file_name_telegram', 
                'file_name', 'folder_name', 'season_folder'
            ]
            if not all(field in metadata for field in required_fields):
                return None
                
            if 'season' in metadata:
                metadata['season'] = int(metadata['season'])
            if 'episode' in metadata:
                metadata['episode'] = int(metadata['episode'])
                
            return metadata
            
        except Exception as e:
            logging.error(f"Error parseando metadata: {str(e)}")
            return None
    
    def get_destination_path(self, message, metadata):
        try:
            topic_id = message.reply_to.reply_to_msg_id if message.reply_to else None
            if not topic_id:
                logging.warning("No se pudo determinar el topic ID del mensaje")
                return None
                
            category = TOPIC_ID_TO_CATEGORY.get(topic_id)
            if not category:
                logging.warning(f"No se encontró categoría para el topic ID: {topic_id}")
                return None
                
            base_dir = CATEGORY_TO_DIRECTORY.get(category)
            if not base_dir:
                logging.warning(f"No hay directorio configurado para la categoría: {category}")
                return None
                
            filename = metadata.get('file_name')
            if not filename:
                logging.warning("Campo 'file_name' no encontrado en metadatos")
                return None
                
            if metadata['type'] == 'movie':
                folder_name = f"{metadata['folder_name']}"
                dest_dir = Path(base_dir) / folder_name
                return dest_dir / filename
                
            elif metadata['type'] == 'tv':
                folder_name = f"{metadata['folder_name']}"
                season_folder = metadata.get('season_folder', 'Season 01')
                dest_dir = Path(base_dir) / folder_name / season_folder
                return dest_dir / filename
                
            else:
                logging.warning(f"Tipo de contenido no soportado: {metadata['type']}")
                return None
                
        except Exception as e:
            logging.error(f"Error determinando ruta de destino: {str(e)}")
            return None
    
    async def download_file(self, message):
        try:
            await asyncio.sleep(1)  # Reduced delay
            
            if not isinstance(message.media, MessageMediaDocument):
                return
                
            if str(message.id) in self.downloaded_files:
                logging.debug(f"Archivo ya descargado (msg_id: {message.id})")
                return
                
            metadata = self.parse_metadata(message.message)
            if not metadata:
                logging.warning("No se pudieron extraer metadatos válidos")
                return
                
            if 'file_name' not in metadata or not metadata['file_name']:
                logging.warning("El campo 'file_name' es obligatorio en los metadatos")
                return
                
            dest_path = self.get_destination_path(message, metadata)
            if not dest_path:
                return
                
            dest_path.parent.mkdir(parents=True, exist_ok=True)
            
            if dest_path.exists():
                logging.info(f"Archivo ya existe: {dest_path}")
                self.save_downloaded_file(str(message.id))
                return
                
            await self._process_download(message, metadata, metadata['file_name'], dest_path)
                
        except Exception as e:
            logging.error(f"Error descargando archivo: {str(e)}", exc_info=True)
    
    async def process_topic(self, topic_id, limit=None):
        try:
            logging.info(f"📂 Procesando topic ID: {topic_id}")
            
            # Use first client for message iteration
            client = self.multi_client.clients[0]
            
            async for message in client.iter_messages(
                TELEGRAM_GROUP_ID,
                limit=limit,
                reply_to=topic_id,
                wait_time=10  # Reduced wait time
            ):
                try:
                    if message.media and isinstance(message.media, MessageMediaDocument):
                        await self.download_file(message)
                        
                        # Small delay between downloads to prevent rate limiting
                        await asyncio.sleep(0.5)
                        
                except FloodWaitError as e:
                    wait_time = e.seconds + 5
                    logging.warning(f"⚠️ Flood wait detectado. Esperando {wait_time} segundos...")
                    await asyncio.sleep(wait_time)
                    continue
                except Exception as e:
                    logging.error(f"Error procesando mensaje: {str(e)}", exc_info=True)
                    continue
                    
        except Exception as e:
            logging.error(f"Error procesando topic {topic_id}: {str(e)}", exc_info=True)
    
    async def process_all_topics(self):
        for topic_name, topic_id in TOPIC_IDS.items():
            logging.info(f"🎯 Iniciando procesamiento de: {topic_name}")
            await self.process_topic(topic_id)
            
            # Print session statistics
            if self.download_stats['total_files'] > 0:
                avg_speed = (self.download_stats['total_bytes'] / 1024 / 1024) / self.download_stats['total_time']
                logging.info(f"📊 Estadísticas del topic {topic_name}:")
                logging.info(f"   📁 Archivos: {self.download_stats['total_files']}")
                logging.info(f"   💾 Total: {self.download_stats['total_bytes']/1024/1024/1024:.2f} GB")
                logging.info(f"   ⚡ Velocidad promedio: {avg_speed:.2f} MB/s")

async def main():
    try:
        # Test cryptg availability
        test_data = os.urandom(1024)
        key = os.urandom(32)
        iv = os.urandom(32)
        
        encrypted = AES.encrypt_ige(test_data, key, iv)
        decrypted = AES.decrypt_ige(encrypted, key, iv)
        if decrypted != test_data:
            raise RuntimeError("❌ Cryptg does not work properly")
        
        logging.info("✅ cryptg available and working")
    except Exception as e:
        logging.critical(f"❌ ERROR ON CRYPTG: {str(e)}")
        raise SystemExit(1)
    
    # Ensure session directory exists
    os.makedirs('session', exist_ok=True)
    os.makedirs('/app/data', exist_ok=True)
    
    # Initialize multi-client downloader
    multi_client = MultiClientDownloader(
        TELEGRAM_API_ID,
        TELEGRAM_API_HASH,
        TELEGRAM_SESSION_NAME,
        num_clients=3  # Use 3 clients for better speed
    )
    
    try:
        logging.info("🚀 Inicializando clientes múltiples...")
        await multi_client.initialize_clients()
        
        downloader = TelegramDownloader(multi_client)
        
        logging.info("📥 Iniciando descarga de todos los topics...")
        await downloader.process_all_topics()
        
        logging.info("✅ Proceso completado exitosamente")
        
    except Exception as e:
        logging.error(f"Error en main: {str(e)}", exc_info=True)
    finally:
        logging.info("🔌 Cerrando conexiones...")
        await multi_client.close_all_clients()

if __name__ == "__main__":
    asyncio.run(main())
Reasons:
  • Blacklisted phrase (1): está
  • RegEx Blacklisted phrase (2): encontr
  • RegEx Blacklisted phrase (2): encontrado
  • Long answer (-1):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: anoop

79686773

Date: 2025-07-02 03:06:26
Score: 2
Natty:
Report link

Plotly.js creates a global stylesheet that is used to show the tooltip (called in plotly.js "hover...": hoverbox, hoverlayer, hoverlabel), as well as for other features - for instance, you can see the "modebar" (the icon menu that's by default at the top-right of the plot div) is misplaced in your shadow DOM version.

The issue is thus the fact that the global stylesheets are not applied to the shadow DOM. Based on the information from this Medium article by EisenbergEffect I applied the global stylesheets to the shadow root of your sankey-sd, using the function:

function addGlobalStylesToShadowRoot(shadowRoot) {
   const globalSheets = Array.from(document.styleSheets)
      .map(x => {
         const sheet = new CSSStyleSheet();
         const css = Array.from(x.cssRules).map(rule => rule.cssText).join(' ');
         sheet.replaceSync(css);
         return sheet;
      });

   shadowRoot.adoptedStyleSheets.push(
      ...globalSheets
   );
}

applied in the constructor of class SankeySD:

   class SankeySD extends HTMLElement {
      constructor() {
         super();
         this.attachShadow({ mode: 'open' });
         addGlobalStylesToShadowRoot(this.shadowRoot);
      }
      // ............... other methods
   }

and it did enable the tooltip and correct the position of the modebar.

Here's a stack snippet demo, based on your original code:

//from https://eisenbergeffect.medium.com/using-global-styles-in-shadow-dom-5b80e802e89d
function addGlobalStylesToShadowRoot(shadowRoot) {
   const globalSheets = Array.from(document.styleSheets)
      .map(x => {
         const sheet = new CSSStyleSheet();
         const css = Array.from(x.cssRules).map(rule => rule.cssText).join(' ');
         sheet.replaceSync(css);
         return sheet;
      });

   shadowRoot.adoptedStyleSheets.push(
      ...globalSheets
   );
}
window.addEventListener('DOMContentLoaded', () => {
   class SankeySD extends HTMLElement {
      constructor() {
         super();
         this.attachShadow({ mode: 'open' });
         addGlobalStylesToShadowRoot(this.shadowRoot);
      }
      connectedCallback() {
         const chartDiv = document.createElement('div');
         chartDiv.id = 'chart';
         chartDiv.style.width = '100%';
         chartDiv.style.height = '100%';
         chartDiv.style.minWidth = '500px';
         chartDiv.style.minHeight = '400px';
         this.shadowRoot.appendChild(chartDiv);

         const labels = ["Start", "Middle", "Begin", "End", "Final"];
         const labelIndex = new Map(labels.map((label, i) => [label, i]));
         const links = [
            { source: "Start", target: "Middle", value: 5, label: "Test" },
            { source: "Start", target: "Middle", value: 3, label: "Test2" },
            { source: "Middle", target: "Start", value: 1, label: "" },
            { source: "Start", target: "End", value: 2, label: "" },
            { source: "Begin", target: "Middle", value: 5, label: "Test" },
            { source: "Middle", target: "End", value: 3, label: "" },
            { source: "Final", target: "Final", value: 0.0001, label: "" }
         ];
         const sources = links.map(link => labelIndex.get(link.source));
         const targets = links.map(link => labelIndex.get(link.target));
         const values = links.map(link => link.value);

         const customData = links.map(link => [link.source, link.target, link.value]);

         const trace = {
            type: "sankey",
            orientation: "h",
            arrangement: "fixed",
            node: {
               label: labels,
               pad: 15,
               thickness: 20,
               line: { color: "black", width: 0.5 },
               hoverlabel: {
                  bgcolor: "white",
                  bordercolor: "darkgrey",
                  font: {
                     color: "black",
                     family: "Open Sans, Arial",
                     size: 14
                  }
               },
               hovertemplate: '%{label}<extra></extra>',
               color: ["#a6cee3", "#1f78b4", "#b2df8a", "#a9b1b9", "#a9b1b9" ]
            },
            link: {
               source: sources,
               target: targets,
               value: values,
               arrowlen: 20,
               pad: 20,
               thickness: 20,
               line: { color: "black", width: 0.2 },
               color: sources.map(i => ["#a6cee3", "#1f78b4", "#b2df8a", "#a9b1b9", "#a9b1b9"][i]),
               customdata: customData,
               hoverlabel: {
                  bgcolor: "white",
                  bordercolor: "darkgrey",
                  font: {
                     color: "black",
                     family: "Open Sans, Arial",
                     size: 14
                  }
               },
               hovertemplate:
                  '<b>%{customdata[0]}</b> → <b>%{customdata[1]}</b><br>' +
                  'Flow Value: <b>%{customdata[2]}</b><extra></extra>'
            }
         };

         const layout = {
            font: { size: 14 },
            //margin: { t: 20, l: 10, r: 10, b: 10 },
            //hovermode: 'closest'
         };

         Plotly.newPlot(chartDiv, [trace], layout, { responsive: true, displayModeBar: true })
            .then((plot) => {
               chartDiv.on('plotly_click', function(eventData) {
                  console.log(eventData);
                  if (!eventData || !eventData.points || !eventData.points.length) return;
                  const point = eventData.points[0];
                  if (typeof point.pointIndex === "number") {
                     const nodeLabel = point.label;
                     alert("Node clicked: " + nodeLabel + "\nNode index: " + point.pointIndex);
                     console.log("Node clicked:", point);
                  } else if (typeof point.pointNumber === "number") {
                     const linkIdx = point.pointNumber;
                     const linkData = customData[linkIdx];
                     alert(
                        "Link clicked: " +
                        linkData[0] + " → " + linkData[1] +
                        "\nValue: " + linkData[2] +
                        "\nLink index: " + linkIdx
                     );
                     console.log("Link clicked:", point);
                  } else {
                     console.log("Clicked background", point);
                  }
               });
            });
      }
   }
   customElements.define('sankey-sd', SankeySD);
});
html, body {
   height: 100%;
   margin: 0;
}
sankey-sd {
   display: block;
   width: 100vw;
   height: 100vh;
}
<sankey-sd></sankey-sd>
<script src="https://cdn.plot.ly/plotly-3.0.1.min.js" charset="utf-8"></script>
<!-- also works with v 2.30.1-->

The click behaviour is not caused by the shadow DOM; in this fiddle, which uses the same plot configuration but without the shadow DOM, the behaviour is the same - there's always a point.pointNumber and never a point.pointIndex.

I can't find the code you have used; can you please show the version that works? In any case, this might be another question, as there should not be multiple issues per post if their solutions are unrelated.

Reasons:
  • Blacklisted phrase (1): another question
  • Blacklisted phrase (0.5): medium.com
  • RegEx Blacklisted phrase (2.5): can you please show
  • Long answer (-1):
  • Has code block (-0.5):
  • Contains question mark (0.5):
  • High reputation (-1):
Posted by: kikon

79686766

Date: 2025-07-02 02:53:23
Score: 2
Natty:
Report link

Font-weight rendering varies across browsers due to different font smoothing and anti-aliasing techniques.
Testing and using web-safe fonts or variable fonts can help ensure consistent appearance.

Reasons:
  • Low length (0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Jack Wills

79686747

Date: 2025-07-02 02:20:16
Score: 2.5
Natty:
Report link

For anyone who is still getting the error after granting access: I deleted the key vault secret reference from my app service's environment variables, saved, and re-added it, and it works now.

Reasons:
  • Low length (0.5):
  • No code block (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: Ji Li

79686739

Date: 2025-07-02 01:59:12
Score: 3.5
Natty:
Report link

I use the workaround of adding '\*.py/\*[!p][!y]' to 'files to exclude', but I don't trust that it is really the right answer.

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Self-answer (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: bzImage

79686735

Date: 2025-07-02 01:55:11
Score: 2
Natty:
Report link

The control uses the client system date/time settings to display the date. The only way to fix this without replacing the entire control with something different is to have the client system changed to the "correct" settings.

This is very frustrating. The control offers a format, but doesn't really care what you set it to.

Reasons:
  • No code block (0.5):
  • Self-answer (0.5):
  • Low reputation (1):
Posted by: user3498788

79686730

Date: 2025-07-02 01:40:08
Score: 1.5
Natty:
Report link

Copy-pasting my code into a new query instance made it work.

Reasons:
  • Probably link only (1):
  • Low length (1.5):
  • No code block (0.5):
  • Single line (0.5):
  • High reputation (-2):
Posted by: FREE PALESTINE

79686727

Date: 2025-07-02 01:36:07
Score: 1
Natty:
Report link

You can set up a Power Automate flow that connects Power BI with Jira:

  1. Create a data alert or trigger in Power BI or directly in Power Automate based on your dataset (e.g., when the number of occurrences for a specific error exceeds a certain threshold within a given date range).

  2. Use Power Automate to monitor this data (either via a scheduled refresh or a Power BI data alert).

  3. Once the condition is met, the flow can automatically create a Jira ticket using the Jira connector.

  4. You can populate the Jira ticket with details from the dataset or spreadsheet (like error type, frequency, affected module, etc.).

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Carlos Diaz

79686725

Date: 2025-07-02 01:33:06
Score: 4.5
Natty:
Report link

There are a couple of things you should check (btw, please share your cloud function code snippet)

  1. Make sure that you are calling/invoking supported GCP Vertex AI Gemini models (Gemini 2.0, Gemini 2.5 Flash/Pro, etc.). Models like PaLM, text-bison and even earlier Gemini models (like Gemini 1.0) have been deprecated, which is most likely why you are getting a 404 due to model deprecation. Please check the supported model doc here to use a proper Gemini model.

  2. Verify that you followed this Vertex AI getting started guide to set up your access to the Gemini models. Based on what you described:

    • You have GCP project

    • You enabled the Vertex AI API

    • IAM: try granting your GCP account the Vertex AI User role. For details, check the Vertex AI IAM permissions here.

  3. I recommend using the Google Gen AI SDK for Python to call Gemini models. It handles the endpoint and authentication; you just need to specify the model to use, for example gemini-2.5-flash (see the sketch below).

These steps should get you going. Please share a code snippet so that I can share an edited version back.
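
As an illustration of step 3, here is a minimal sketch using the Google Gen AI SDK for Python (pip install google-genai); the project ID, region, and prompt below are placeholders, not values from the question:

# Minimal sketch, assuming the google-genai package is installed and that the
# placeholder project ID and region are replaced with real values.
from google import genai

client = genai.Client(
    vertexai=True,               # route requests through Vertex AI
    project="your-project-id",   # placeholder GCP project ID
    location="us-central1",      # a region where Gemini is available
)

response = client.models.generate_content(
    model="gemini-2.5-flash",    # a currently supported Gemini model
    contents="Say hello in one sentence.",
)
print(response.text)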

Reasons:
  • RegEx Blacklisted phrase (2.5): please share your
  • RegEx Blacklisted phrase (2.5): Please share code
  • Long answer (-0.5):
  • Has code block (-0.5):
  • Low reputation (0.5):
Posted by: Gang Chen

79686715

Date: 2025-07-02 01:14:02
Score: 1.5
Natty:
Report link
    <script>
     window.setInterval(function() {
     var elem = document.getElementById('fixed');
     elem.scrollTop = elem.scrollHeight;  }, 3000);
    </script>

This worked well, based on @johnscoops' answer.

Reasons:
  • Probably link only (1):
  • Low length (0.5):
  • Has code block (-0.5):
  • Self-answer (0.5):
Posted by: Gunay Anach

79686714

Date: 2025-07-02 01:13:01
Score: 4
Natty:
Report link

I am pleased to share that the P vs NP question has been resolved, establishing that P = NP. The full write‑up and proof sketch are available here: https://huggingface.co/caletechnology/satisfier/blob/main/Solving_the_Boolean_k_SAT_Problem_in__Polynomial_Time.pdf

You can also review and experiment with the accompanying C implementation: https://huggingface.co/caletechnology/satisfier/tree/main

I welcome feedback and discussion on this claim.

Reasons:
  • RegEx Blacklisted phrase (1): I am please
  • Probably link only (1):
  • Low length (0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Friend2093

79686711

Date: 2025-07-02 01:06:59
Score: 3
Natty:
Report link

Be aware that these numbers are based on the training data that arrived at that node via different paths, rather than on your prediction results.

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: Jun Xu

79686698

Date: 2025-07-02 00:17:50
Score: 4
Natty:
Report link

I am pleased to share that the P vs NP question has been resolved, establishing that P = NP. The full write‑up and proof sketch are available here: https://huggingface.co/caletechnology/satisfier/blob/main/Solving_the_Boolean_k_SAT_Problem_in__Polynomial_Time.pdf

You can also review and experiment with the accompanying C implementation: https://huggingface.co/caletechnology/satisfier/tree/main

I welcome feedback and discussion on this claim.

Reasons:
  • RegEx Blacklisted phrase (1): I am please
  • Probably link only (1):
  • Low length (0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Friend2093

79686694

Date: 2025-07-02 00:11:49
Score: 0.5
Natty:
Report link

I was able to completely avoid the limitation by eliminating instances of query, and instead putting my "iter()" methods directly on the type.

pub trait Query<S: Storage> {
    type Result<'r>: 'r where S: 'r;

    fn iter(storage: &S) -> Option<impl Iterator<Item = (Entity, Option<Self::Result<'_>>)>>;
    fn iter_chunks(storage: &S, chunk_size: usize) -> Option<impl Iterator<Item = impl Iterator<Item = (Entity, Option<Self::Result<'_>>)>>>;
}
Reasons:
  • Has code block (-0.5):
  • Self-answer (0.5):
  • Low reputation (0.5):
Posted by: gjh33

79686693

Date: 2025-07-02 00:05:47
Score: 2.5
Natty:
Report link

The coding text input requires quotes in order to treat your input as one command; otherwise, each space is treated as a separate command on its own. Also, your output will only return the last buffer as 'line', whereas it appears you were trying to set up an output variable 'output'.

Reasons:
  • Low length (0.5):
  • No code block (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: Edward Pearl

79686687

Date: 2025-07-01 23:49:44
Score: 5.5
Natty:
Report link

I'm having the same problem. I've tried various nodemailer examples from the internet but they still failed. After tracking down the problem, it turns out that in my case the API was not working properly because of the use of "output: 'export'" in the next.config.js file. If you don't use "output: 'export'", Next.js uses its full-stack capability, which means it supports API routes (serverless functions on Vercel). So if anyone has the same problem and it hasn't been resolved, my suggestion is to remove "output: 'export'" from the next.config.js file. Btw, I use nodemailer, Gmail SMTP, and deploy to Vercel.

Reasons:
  • Blacklisted phrase (1): I'm having the same problem
  • Blacklisted phrase (0.5): not working properly
  • Long answer (-0.5):
  • No code block (0.5):
  • Me too answer (2.5): I'm having the same problem
  • Single line (0.5):
  • Low reputation (1):
Posted by: Xatriya

79686682

Date: 2025-07-01 23:32:40
Score: 9
Natty: 7.5
Report link

Did you manage to resolve this? I am hitting the same issue.

Thanks!

Reasons:
  • Blacklisted phrase (0.5): Thanks
  • RegEx Blacklisted phrase (3): Did you manage to resolve this
  • RegEx Blacklisted phrase (1.5): resolve this?
  • Low length (1.5):
  • No code block (0.5):
  • Contains question mark (0.5):
  • Starts with a question (0.5): Did you
  • Low reputation (1):
Posted by: nindim

79686681

Date: 2025-07-01 23:28:39
Score: 1.5
Natty:
Report link

This is happening because at store build time window.innerWidth is undefined, and until the resize event listener is triggered, a new value will not be set.

Reasons:
  • Low length (1):
  • Has code block (-0.5):
  • Single line (0.5):
  • Low reputation (0.5):
Posted by: Melvin Kosisochukwu

79686680

Date: 2025-07-01 23:27:38
Score: 4.5
Natty:
Report link

The moneyRemoved variable wasn't being set to true. I should have debugged better. Thank you to @Rufus L, though, for showing me how to properly get the result from an async method without using .GetAwaiter().GetResult()!

Reasons:
  • Blacklisted phrase (0.5): Thank you
  • Low length (0.5):
  • No code block (0.5):
  • User mentioned (1): @Rufus
  • Self-answer (0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: Paulo

79686676

Date: 2025-07-01 23:20:37
Score: 1
Natty:
Report link

There was a permission issue. The Groovy runtime could not get the resource because I had not opened my packages to org.apache.groovy. I just needed to add:

opens my_package to org.apache.groovy;

to module-info.java.

Reasons:
  • Low length (0.5):
  • Has code block (-0.5):
  • Self-answer (0.5):
  • Low reputation (0.5):
Posted by: Choosechee

79686666

Date: 2025-07-01 23:01:33
Score: 1.5
Natty:
Report link

You probably need to handle Form.WndProc and capture the Windows messages for the shortcut events. This is a little more complicated, but it allows you to capture a lot of things in one place, and it has been answered here before for the usual events of forms closing and minimising

Cancel A WinForm Minimize?

Preventing a VB.Net form from closing

There are probably message codes for those shortcuts

Reasons:
  • No code block (0.5):
  • Low reputation (1):
Posted by: Rorthron

79686655

Date: 2025-07-01 22:34:27
Score: 0.5
Natty:
Report link

It worked by using:

{
  "mcpServers": {
    "firebase": {
      "command": "firebase",
      "args": ["experimental:mcp"]
    }
  }
}
Reasons:
  • Whitelisted phrase (-1): it worked
  • Low length (1):
  • Has code block (-0.5):
  • Low reputation (1):
Posted by: Leuck Paul

79686651

Date: 2025-07-01 22:27:25
Score: 0.5
Natty:
Report link

Your file is fine; it's just that Google STT V2 doesn't support M4A files (https://cloud.google.com/speech-to-text/docs/encoding), even though it looks like it should. It's like charging your phone with a non-OEM charger that should work but doesn't. Even tools like ffprobe say the file is OK, but Google silently skips it.

What you can do is convert your file to a supported format like .wav or .flac and submit that to Google STT; it should work.
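
Just as an illustration, here is a minimal Python sketch of that conversion, assuming ffmpeg is installed and on PATH; the file names are placeholders:

# Convert an M4A file to 16 kHz mono FLAC, a format Google STT accepts.
# Assumes ffmpeg is on PATH; "input.m4a" and "output.flac" are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "input.m4a",   # placeholder source file
        "-ar", "16000",      # 16 kHz sample rate, common for speech-to-text
        "-ac", "1",          # mono audio
        "output.flac",       # lossless format supported by Google STT
    ],
    check=True,
)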

It would be interesting to have this available natively. On Google's side, there is a feature request you can file, but there is no timeline on when it might be done.

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Low reputation (0.5):
Posted by: marky

79686650

Date: 2025-07-01 22:26:25
Score: 2
Natty:
Report link

I believe this is only an issue on Windows. I am experiencing the same thing, but running Tensorboard on a Debian server or even through WSL works without an issue. See the associated github issue:

https://github.com/tensorflow/tensorboard/issues/6907

Reasons:
  • Low length (0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Shanzhaii

79686647

Date: 2025-07-01 22:24:24
Score: 2.5
Natty:
Report link

Here are two SO questions with good answers:

- Inline-block element height issue - found through Google
- Why does inline-block cause this div to have height? - silviagreen's comment to the question

Reasons:
  • Probably link only (1):
  • Low length (1):
  • No code block (0.5):
Posted by: Mathieu CAROFF

79686638

Date: 2025-07-01 21:57:19
Score: 1
Natty:
Report link

As per the H.264 specification, the H.264 raw byte stream does not contain any presentation timestamps. Here is the verbiage from there; I will update with more details as I find them.

One of the main properties of H.264 is the complete decoupling of the transmission time, the decoding time, and the sampling or presentation time of slices and pictures. The decoding process specified in H.264 is unaware of time, and the H.264 syntax does not carry information such as the number of skipped frames (as is common in the form of the Temporal Reference in earlier video compression standards). Also, there are NAL units that affect many pictures and that are, therefore, inherently timeless. For this reason, the handling of the RTP timestamp requires some special considerations for NAL units for which the sampling or presentation time is not defined or, at transmission time, unknown.

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Self-answer (0.5):
  • Low reputation (0.5):
Posted by: Trident

79686636

Date: 2025-07-01 21:56:18
Score: 0.5
Natty:
Report link

timegm() is a non-standard GNU extension. A portable version using mktime() is below. It sets the TZ environment variable to UTC, calls mktime(), and restores the value of TZ. Since TZ is modified, this might not be thread-safe. I understand the GNU libc version of tzset() does use a mutex, so it should be thread-safe.

See:

#include <time.h>
#include <stdlib.h>

time_t
my_timegm(struct tm *tm)
{
    time_t ret;
    char *tz;

    tz = getenv("TZ");
    setenv("TZ", "", 1);
    tzset();
    ret = mktime(tm);
    if (tz)
        setenv("TZ", tz, 1);
    else
        unsetenv("TZ");
    tzset();
    return ret;
}
Reasons:
  • Long answer (-0.5):
  • Has code block (-0.5):
  • Starts with a question (0.5): is a
  • Low reputation (1):
Posted by: iandiver

79686628

Date: 2025-07-01 21:43:15
Score: 1
Natty:
Report link

I have fixed this problem. Go to the "data" directory of MySQL and rename the file "binlog.index" to "binlog.index_bak"; that's it. Restart the MySQL server and it will be reset.

Reasons:
  • Low length (1):
  • No code block (0.5):
  • Single line (0.5):
  • High reputation (-1):
Posted by: mjb

79686606

Date: 2025-07-01 21:13:08
Score: 1
Natty:
Report link

Yeah, ROPC is outdated and not recommended — no MFA, no SSO, and hard to switch IdPs later.

Use Authorization Code Flow with PKCE instead. It supports MFA/SSO and gives you refresh tokens if you request the offline_access scope.

In Keycloak, enable this by assigning the offline_access role to users (or include it in the realm’s default roles).

Then, in the /auth request, include offline_access in the scope.

When you exchange the auth code at /token, you'll get an offline token instead of a standard refresh token.

This lets you use Keycloak’s login page, so you can enable MFA, SSO, or whatever else you need.

Much safer, future-proof, and fully standard.
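
For illustration, here is a minimal sketch of the /token exchange in Python with requests; the Keycloak base URL, realm, client ID, redirect URI, code, and code_verifier are placeholders, not values from the question:

# Authorization Code + PKCE token exchange against a Keycloak realm.
# Every value below is a placeholder -- substitute your own realm, client and PKCE data.
import requests

token_url = "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token"

response = requests.post(
    token_url,
    data={
        "grant_type": "authorization_code",
        "client_id": "my-public-client",
        "code": "<authorization code returned to the redirect URI>",
        "redirect_uri": "https://app.example.com/callback",
        "code_verifier": "<PKCE code verifier used for the /auth request>",
    },
    timeout=10,
)
response.raise_for_status()
tokens = response.json()
# If offline_access was in the /auth request's scope, refresh_token is an offline token.
print(tokens["access_token"])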

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Raf897