I confirm @vahagn's answer. In case anyone is wondering what a smart banner is: you just have to add a meta tag inside the <head></head> tags, like below:
<!DOCTYPE html>
<html lang="en">
<head>
<meta name="apple-itunes-app" content="app-id=1234567890, app-clip-bundle-id=com.example.myapp.clip">
<title>Your title</title>
...
</head>
<body>
...
</body>
</html>
The issue was with the file structure. I had to update the layout for SDK 52 and Expo Router.
I don't see exactly what's wrong with your code. How did you "create" the user? Where are you getting the logs from? Did you check whether there are any users? Run User.all in the Rails console.
Here is a short tutorial that helped me: https://dev.to/casseylottman/adding-a-field-to-your-sign-up-form-with-devise-10i1
Split the trues and falses, then get the IDs that exist in both groups.
SQL Server:
SELECT DISTINCT t1.ID
FROM (SELECT ID FROM thetable WHERE VALUE = 'false') t1
JOIN (SELECT ID FROM thetable WHERE VALUE = 'true') t2
  ON t1.ID = t2.ID
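As a side note, databases that support INTERSECT can express the same idea more directly. Here's a quick sanity check of the logic using Python's sqlite3 (table and column names taken from the answer; the sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE thetable (ID INTEGER, VALUE TEXT)")
con.executemany(
    "INSERT INTO thetable VALUES (?, ?)",
    [(1, "true"), (1, "false"), (2, "true"), (3, "false")],
)
# IDs that appear with both 'true' and 'false'
rows = con.execute(
    "SELECT ID FROM thetable WHERE VALUE = 'true' "
    "INTERSECT "
    "SELECT ID FROM thetable WHERE VALUE = 'false'"
).fetchall()
print(rows)  # → [(1,)]
```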
I solved the issue by running VS Code as an administrator. I hope it helps someone (:
The draw and fill methods can take a Shape, one of which is an Arc2D.Double.
By default, bots can only access messages in chats the bot is a member of, so what happened here is that the bot is not a member of the chat_id given in reply_parameters.
The best solution is to use Trigger = "On save" and When updating? = "Created on". Note that after creation it can take up to 4 hours for the flow to be triggered.
I want to do the same thing in my app, but so far I have only found a solution in SwiftUI; for React Native I still have no clue.
For the SwiftUI source, you can find it here: https://github.com/metasidd/Prototype-Siri-Screen-Animation
You can implement it like this:
Effect screenshots:
let view = UITextView()
view.attributedText = testAttributedString()
return view

func testAttributedString() -> NSAttributedString {
    let test = NSMutableAttributedString()
    test.append(.init(string: "How"))
    test.append("are".generateImage(.init(width: 60, height: 30)))
    test.append(.init(string: "you"))
    return test
}

extension String {
    func generateImage(_ size: CGSize,
                       textFont: UIFont = .systemFont(ofSize: 16),
                       textColor: UIColor = .white,
                       fillColor: UIColor = .brown) -> NSAttributedString {
        let format = UIGraphicsImageRendererFormat()
        format.scale = UIScreen.main.scale
        let render = UIGraphicsImageRenderer(size: size, format: format)
        let image = render.image { context in
            // draw a rounded "pill" background
            let ellipsePath = UIBezierPath(roundedRect: CGRect(origin: .zero, size: size), cornerRadius: size.height / 2).cgPath
            context.cgContext.setFillColor(fillColor.cgColor)
            context.cgContext.addPath(ellipsePath)
            context.cgContext.fillPath()
            // draw the string centered on top of the background
            let attributed = NSAttributedString(string: self, attributes: [.font: textFont, .foregroundColor: textColor])
            let textSize = attributed.size()
            attributed.draw(at: CGPoint(x: (size.width - textSize.width) / 2, y: (size.height - textSize.height) / 2))
        }
        let attachment = NSTextAttachment(data: nil, ofType: nil)
        attachment.image = image
        attachment.bounds = .init(x: 0, y: -9.3125, width: size.width, height: size.height)
        attachment.lineLayoutPadding = 5
        return .init(attachment: attachment)
    }
}
I agree with rizzling about that:
useSuspenseQuery() blocks rendering until the data is fetched, while useQuery() renders and loads data at the same time.
Say your rendering time is limited to 60 seconds: with the first hook (useSuspenseQuery), rendering won't start until the data is fetched (however long that takes), but with the second hook (useQuery), rendering starts immediately, in parallel with fetching the data.
Let's say fetching the data takes 90 seconds. If you use useSuspenseQuery, you will not face any issue, because rendering will start after the 90 seconds. If you use useQuery, you will face the timeout error, because you reach the 60-second mark and no data has been fetched yet.
You need to review your API's performance and use logging and monitoring tools to find the bottlenecks.
If you want to use MessageBoxA, you need to open your tasks.json file and add User32.lib to the link arguments.
Here is a sample configuration:
{
"tasks": [
{
"type": "cppbuild",
"label": "C/C++: cl.exe build active file",
"command": "cl.exe",
"args": [
"/Zi",
"/EHsc",
"/nologo",
"/Fe${fileDirname}\\${fileBasenameNoExtension}.exe",
"${file}",
"/link",
"User32.lib"
],
"options": {
"cwd": "${fileDirname}"
},
"problemMatcher": [
"$msCompile"
],
"group": {
"kind": "build",
"isDefault": true
},
"detail": "Task generated by Debugger."
}
],
"version": "2.0.0"
}
Add tabindex="-1" to your element; setting this attribute to -1 makes the element unfocusable.
You can then add/remove this attribute dynamically.
To answer the second part of the question: if you have the non-standard rev (reverse) command available, simply reverse the line, cut from the nth column to the end, then reverse back, e.g. '... | rev | cut -d. -f2- | rev'.
To combine this with the first part of your question, you would cut the first n columns before the first rev.
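For comparison, the same "drop everything after the last delimiter" operation is a one-liner in Python with rsplit (the filename is a made-up example):

```python
line = "archive.tar.gz"
# shell equivalent: echo "$line" | rev | cut -d. -f2- | rev
stem = line.rsplit(".", 1)[0]
print(stem)  # → archive.tar
```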
@Mike Macpherson Sorry for responding exactly one year later, buuuut... did you find any solution for your problem? I am facing the same one.
I used FFmpegMetadataRetriever, which helped me get some stream info; maybe that could help you.
But I am still trying to find out how to get more information using ExoPlayer.
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose up
Hello, I found this blog helpful for validating the content of a file. However, the logic that compares the signature works for some known file types like jpeg, gud, doc, docx, etc. The logic doesn't work for file types like txt, log, and JSON. Is there any solution to validate the content type of txt, log, and JSON files?
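For context: signature (magic-byte) checks only work for formats that actually have magic bytes; plain-text formats like txt, log, and JSON have none, so the practical fallback is to try decoding or parsing the content instead. A hedged Python sketch (the helper names are my own, not from the blog):

```python
import json

def looks_like_jpeg(data: bytes) -> bool:
    # JPEG files start with the magic bytes FF D8 FF
    return data.startswith(b"\xff\xd8\xff")

def looks_like_json(data: bytes) -> bool:
    # text formats have no signature; attempting to parse is the practical check
    try:
        json.loads(data.decode("utf-8"))
        return True
    except (UnicodeDecodeError, ValueError):
        return False
```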
OK, I was able to resolve it using Java 17 by changing some configurations.
I used Scala 2.12.15 and updated sparkTestsVersion to 1.1.0 (this helped solve the ReflectiveOperationException).
As for the Java options, I didn't find a good way of setting them in build.sbt, so I just added a step in GitHub Actions as follows:
- name: Set JAVA_OPTS
if: ${{ inputs.JAVA_VERSION == '17' }}
run: echo "JAVA_OPTS=--add-exports=java.base/sun.nio.ch=ALL-UNNAMED" >> $GITHUB_ENV
According to Google, this is the solution:
AppCompatDelegate.setApplicationLocales(LocaleListCompat.forLanguageTags(locale));
Use the language tag without the regional r prefix, as in my example kq-GN.
NB! This code only works from Android Tiramisu and up.
My problem is that even this code is not working. The app will not change to any language with this code. If I run
AppCompatDelegate.getApplicationLocales().toString()
I get this output: []
I even get this result before I switch the language, so it seems that something is wrong here.
MixPlayer (https://www.npmjs.com/package/mix-player) is your solution! It supports most of the common file formats (FLAC, MP3, Ogg, VOC, and WAV) and offers customizability for fade-ins, volume changing, seeking, looping, etc.
Here's an example snippet:
import { MixPlayer } from "mix-player";
MixPlayer.play("test_audio.mp3");
MixPlayer.onAudioEnd(() => {
console.log("Audio ended! Now what?");
});
await MixPlayer.wait();
process.exit(0);
You can construct a TikTok sharing URL for a specific piece of content. If you have a TikTok link to share, you can simply redirect the user to:
https://www.tiktok.com/share/video?url=<your-content-url>
Replace <your-content-url> with the URL of the content you want to share.
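If you build that redirect programmatically, remember to URL-encode the content URL. A small Python sketch (the content URL is a made-up example):

```python
from urllib.parse import urlencode

content_url = "https://example.com/my-video"  # hypothetical content URL
share_url = "https://www.tiktok.com/share/video?" + urlencode({"url": content_url})
print(share_url)
# → https://www.tiktok.com/share/video?url=https%3A%2F%2Fexample.com%2Fmy-video
```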
Actually, you are using the wrong plugin. The right plugin for you is "LottieFiles"; here's a screenshot of the plugin. There are a ton of videos on YouTube about how to use this plugin; I'm sharing one here: https://www.youtube.com/watch?v=mtmYqqbpUVs
Additionally, you would want to use SVG animations on the web rather than GIFs, because they are vector-graphic animations with two benefits: they are tiny in size, and they scale without getting pixelated. GIFs are almost outdated and obsolete for the web, in my opinion.
I encountered the same issue, and for me, it was related to how I structured the logic in my React component. Specifically, I had the Google login initialization, One Tap prompt display, and login button rendering all inside a single useEffect hook. Once I split the logic into separate useEffect hooks for each part, the One Tap modal started dismissing as expected—without needing any manual intervention.
Interestingly, I found that the issue of the modal not dismissing was only present in Chrome. The modal dismissed correctly in other browsers, but not in Chrome. Splitting the logic into separate hooks resolved the issue in Chrome as well.
2025:
For Google chrome, checkout the step here from the docs: https://developer.chrome.com/docs/devtools/overrides
Overriding the header content of the network resource does it for me.
I resolved this issue by adding the TestNG library via Configure Build Path and updating the TestNG plugin.
I'm using OpenSSL 3.3.1 4 Jun 2024 (Library: OpenSSL 3.3.1 4 Jun 2024) on Ubuntu 24.10.
I'm having similar issues, but here are two notes: 1) you are not specifying a signer_digest, either in the config file or via a -digest command-line option; 2) we can't see your certificate information, so we can't assess whether the certificates are well-formed.
And that was the comment I was about to post, when I tried a few more things and it started working.
Starting from the end, here's my config file, named x509.cnf:
[ server ]
basicConstraints = CA:FALSE
extendedKeyUsage = critical, timeStamping
[ tsa ]
default_tsa = tsa_config
[ tsa_config ]
dir = .
serial = $dir/serial
crypto_device = builtin
signer_cert = $dir/ca-int.crt
signer_digest = SHA256
signer_key = $dir/ca-int.key
default_policy = 1.2.3.4.1
digests = sha256
accuracy = secs:1, millisecs:500, microsecs:100
ordering = yes
tsa_name = yes
Two things are immediately apparent:
1. default_policy expects the actual value, not a section name. I got this one from the error message:
4027392CF87A0000:error:17800087:time stamp routines:ts_CONF_invalid:var bad value:../crypto/ts/ts_conf.c:120:tsa_config::default_policy
2. signer_digest is required. I got:
40473E889B7C0000:error:17800088:time stamp routines:ts_CONF_lookup_fail:cannot find config variable:../crypto/ts/ts_conf.c:115:tsa_config::signer_digest
so I added the line:
signer_digest = SHA256
Documentation states this is not optional, although it's silent as to the actual values. Yeah, openssl docs, right? Thank God the product is actually great.
Here's my steps:
LEN=${LEN:-2048}
# create a root.
openssl req -new -x509 -noenc -out ca.crt -keyout ca.key -set_serial 1 -subj /CN=CA_ROOT -newkey rsa:$LEN -sha512 || exit 1
# create TSA CSR
openssl req -new -noenc -config x509.cnf -reqexts server -out tsa.csr -keyout tsa.key -subj /CN=TSA -newkey rsa:$LEN -sha512 || exit 1
# Sign the TSA with `ca.crt`
openssl x509 -req -in tsa.csr -CAkey ca.key -CA ca.crt -days 20 -set_serial 10 -sha512 -out tsa.crt -copy_extensions copy || exit 1
As you can see, the ROOT is generated completely without a configuration and the TSA is then signed by the ROOT. The crucial point here is this line in your config:
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
which is precisely why you get something like:
4097C0FB27790000:error:17800075:time stamp routines:TS_RESP_CTX_set_signer_cert:invalid signer certificate purpose:../crypto/ts/ts_rsp_sign.c:142:
The only key usage of this certificate must be timeStamping, which, not being among the standard key usages, must be supplied via an extended key usage extension. If this is as self-evident to you as it was to me, welcome to RFC HELL! By now I know by heart larger swaths of RFC 5280 than is mentally healthy, and I still feel quite ignorant.
So, remove the keyUsage line from your cnf and it should fly.
Just run:
openssl ts -reply -config x509.cnf -queryfile request.tsq
and admire the gibberish on your screen. Or add the -out response.tsr and save it for later.
For me, the issue was an ad-blocker browser plugin; I just turned off the plugin and the issue was resolved. :)
I have a similar issue. When our AWS build pipelines run cdk synth, the process downloads the public.ecr.aws/sam/build-python3.10 image and then runs the following command which now pulls in v2.0.0 of poetry which no longer has the required export option:
[2/2] RUN python -m venv /usr/app/venv && mkdir /tmp/pip-cache && chmod -R 777 /tmp/pip-cache && pip install --upgrade pip && mkdir /tmp/poetry-cache && chmod -R 777 /tmp/poetry-cache && pip install pipenv==2022.4.8 poetry && rm -rf /tmp/pip-cache/* /tmp/poetry-cache/*
After detailed analysis, we observed that at the time of the issue the Varnish server processes increased, and Varnish returned incomplete requests with 504s to the Google load balancer. I am sharing the Google LB error and SAR command output below:
{ "insertId": "1l6m956f37u7rz", "jsonPayload": { "@type": "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry", "backendTargetProjectNumber": "projects/488", "remoteIp": "106.197.5.134", "statusDetails": "client_disconnected_before_any_response", "cacheDecision": [ "CACHE_MODE_USE_ORIGIN_HEADERS" ] }, "httpRequest": { "requestMethod": "POST", "requestUrl": "https://abc/iry", "requestSize": "364", "userAgent": "Mozilla/5.0 (iPad; CPU OS 15_5 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/125.0.6422.80 Mobile/15E148 Safari/604.1", "remoteIp": "106.197.5.134", "referer": "https://xyz/efg/w34pro-smartwatch-23944871197.html?pos=2&kwd=smart%20watch&tags=A|PL|||8752.144|Price|product|||LSlc|rsf:pl-|-res:RC4|ktp:N0|stype:attr=1|mtp:G|grpfl:45|wc:2|qr_nm:gd|com-cf:nl|ptrs:na|mc:184363|cat:248|qry_typ:P|lang:en|flavl:10|cs:9555", "latency": "0.024914s" }, "resource": { "type": "http_load_balancer", "labels": { "zone": "global", "forwarding_rule_name": "-logical-seperation-443lb", "target_proxy_name": "logical-speration-lb-target-proxy-2", "backend_service_name": "varnish-group3", "url_map_name": "logical-speration-lb", "project_id": "abc" } }, "timestamp": "2025-01-08T03:32:37.468854Z", "severity": "INFO", "logName": "projects/987/logs/requests",
}
Output of the SAR command (when the process count increases from 1500 to 3k–4k, the errors start):
03:00:07 IST 4 1543 1.42 1.52 1.55 1
03:01:07 IST 2 1547 1.31 1.48 1.53 1
03:01:07 IST runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked
03:02:06 IST 4 2044 1.65 1.55 1.55 0
03:03:06 IST 1 4079 1.38 1.49 1.53 0
03:04:06 IST 1 4224 1.67 1.54 1.55 0
03:05:06 IST 2 4228 1.69 1.58 1.56 1
03:06:06 IST 1 4223 1.43 1.53 1.54 2
03:07:06 IST 1 4208 1.60 1.57 1.56 0
03:08:06 IST 1 4196 1.54 1.54 1.55 0
03:09:06 IST 1 4063 1.66 1.58 1.56 0
03:10:06 IST 1 3822 1.58 1.58 1.56 0
03:11:06 IST 1 3592 1.56 1.55 1.55 0
03:12:06 IST 2 3349 1.24 1.46 1.52 0
03:13:06 IST 1 3098 1.29 1.44 1.50 0
03:14:06 IST 1 2863 1.41 1.46 1.51 0
03:15:06 IST 1 2618 1.36 1.43 1.50 0
03:16:06 IST 1 2391 1.85 1.57 1.54 0
03:17:06 IST 2 2147 1.52 1.53 1.53 0
Sharing a log line below:
20241218211542 - - - 0 - Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.6778.108 Mobile Safari/537.36 (compatible; Googlebot/2.1; )
FROM signal si LEFT JOIN block b ON prov.block_id = b.id AND si.signal_id = b.id AND si.type = 'BLOCK'
At this point in the query, prov.block_id is not visible; you only add prov later.
GitHub reusable workflow inputs and secrets are defined and passed separately, which is why we can't pass secrets as build-argument values.
However, you can work around this in the following way. Pass the secret placeholders in the build_args input:
build_args: |
ARG_ONE=${ARG_ONE}
ARG_TWO=ARG_TWO_plain_text
Inside the reusable workflow, ${ARG_ONE} is replaced with the value of the ARG_ONE secret, and the substituted multiline string is passed via build-args as usual to the docker/build-push-action action. Substituted secret values in build_args will be masked like regular secrets.
name: Docker
on:
workflow_dispatch:
jobs:
build-and-push:
name: Build and Push
uses: org/repo/.github/workflows/docker-reusable.yml@main
with:
docker_file: docker/Dockerfile
build_args: |
ARG_ONE=${ARG_ONE}
ARG_TWO=ARG_TWO_plain_text
secrets: inherit
name: Docker reusable workflow
on:
workflow_call:
inputs:
docker_file:
default: Dockerfile
description: Dockerfile
required: false
type: string
build_args:
default: ''
description: Build arguments
required: false
type: string
env:
DOCKER_FILE: ${{ inputs.docker_file }}
BUILD_ARGS: ${{ inputs.build_args }}
jobs:
build:
name: Build and push
runs-on: ubuntu-latest
steps:
- name: Secrets to variables
if: ${{ env.BUILD_ARGS != '' }}
uses: oNaiPs/[email protected]
with:
secrets: ${{ toJSON(secrets) }}
exclude: DOCKERHUB*
- name: Substitute build args
if: ${{ env.BUILD_ARGS != '' }}
run: |
{
echo 'BUILD_ARGS<<EOF'
echo "${{ env.BUILD_ARGS }}"
echo EOF
} >> "$GITHUB_ENV"
- name: Build and Push by digest
id: build
uses: docker/build-push-action@v6
with:
context: .
file: ${{ env.DOCKER_FILE }}
platforms: linux/amd64,linux/arm64
push: true
build-args: |
${{ env.BUILD_ARGS }}
labels: ${{ steps.meta.outputs.labels }}
This partial example is based on "Build and load multi-platform images" from the action's Examples.
We added two optional steps, which are executed only when the build_args input is passed; we use oNaiPs/secrets-to-env-action to expose the secrets (passed with secrets: inherit) as environment variables.
In addition to the OP's answer, ensure that your PATH environment variable includes %HADOOP_HOME%\bin (on Windows); otherwise downloading the correct winutils version won't work.
OK, after following @pskink's comment I reached the solution:
void initState() {
  super.initState();
  // other code
  SchedulerBinding.instance.addPostFrameCallback((timeStamp) {
    context.findRenderObject()?.visitChildren(_visitor);
  });
}

void _visitor(RenderObject child) {
  if (child is RenderEditable) {
    setState(() {
      // assign RenderEditable node to widget state
      // make sure you get the correct child; for me there is only one textfield for testing
      reEdt = child;
    });
    return;
  }
  child.visitChildren(_visitor);
}

// call when inserting text and you want to scroll to the cursor
void scrollToSelection(TextSelection selection) {
  // find the local rect of the cursor, or of the start of the selection when selecting text
  final localRect = reEdt?.getLocalRectForCaret(TextPosition(offset: selection.baseOffset));
  if (localRect == null) return;
  scrollController.jumpTo(localRect.top);
}
and don't forget to assign scrollController to TextField
For me this part would not be correct:
output_shape = ((A.shape[0] - kernel_size) // stride + 1,
                (A.shape[1] - kernel_size) // stride + 1)
In case A.shape = [5, 5], kernel_size = 3, stride = 2, it would give output_shape = 2, but the result should be output_shape = 3. In my opinion the correct expression should be:
output_shape = (ceil((A.shape[0] - kernel_size) / stride + 1),
                ceil((A.shape[1] - kernel_size) / stride + 1))
Regards.
Yes, this works for stocks. How can I get data for NIFTY and BANKNIFTY for all columns and send the Excel output to the output folder?
What should I write instead of stocks for F&O?
api_req = req.get('https://www.nseindia.com/api/quote-derivative?symbol=NIFTY', headers=headers).json()
for item in api_req['stocks']:
    data.append([
        item['metadata']['instrumentType'],
        item['metadata']['openPrice']])
I had a similar problem updating a Node.js project to use moduleResolution of node16.
Removing an old paths section that explicitly forced the TS compiler to look in a specific location was the solution.
"paths": {
"*": [
"node_modules/*"
]
}
People usually mix up two things: C# 3.0 and .NET Framework 3.5.
As people have described previously, with tables mapping each C# language version to its compatible Visual Studio and .NET Framework: C# 3.0 was the version of the programming language released in 2007, and it came with great features like LINQ (for working with data).
At the same time, there was an update called .NET Framework 3.5, which added tools for developers.
The confusion exists because .NET Framework 3.5 was released around the same time (after C# 3.0), so some people mistakenly think of it as "C# 3.5", but C# 3.5 doesn't exist.
It's just C# 3.0 with extra tools from .NET Framework 3.5.
When you use the { objectname } syntax on a library, it means you are importing a specific export of that module. Example:
example.js:
export const hello = () => {
  console.log("hello");
};
export const hello_world = () => {
  console.log("hello world");
};
Using example.js:
import { hello } from "./example.js";
hello();
In this case we can only use the function hello(). To be able to use hello_world() as well:
import { hello, hello_world } from "./example.js";
hello();
hello_world();
If you need more clarification or something is not clear, do not hesitate to ask.
Add a Material widget as the parent of the Column.
I know it's been 12 years, but email clients strip some CSS properties for security purposes, and position is one of them. So it will work fine locally when you view your HTML file, but not when you upload it as your signature. If you inspect the email, you can see that property completely removed.
As mentioned by @Shiv_Kumar_Ganesh, one solution is to use a background image instead, but you will find one more issue there: when forwarding the email, if you remove any of the existing content, your background image may go missing from the HTML.
If anyone knows a solution for my problem, kindly revert back quickly.
If we assume address 0x100 is the start of the program, and 0x100 to 0x103 holds the instruction mov ax, 5:
in memory, 5 as a 16-bit value is stored in 2 bytes, 0x100 and 0x101. In big-endian order it would be stored as 00 at 0x100 and 05 at 0x101, but in little-endian order it is stored as 05 at 0x100 and 00 at 0x101. The reason is that the least significant byte is stored at the lowest address, i.e. 05 at 0x100 and 00 at 0x101.
To understand this in detail, check my Medium blog about it: https://medium.com/@farhanalam0407/high-big-endian-and-low-small-endian-a365a724dd0c
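A quick way to see this byte order outside of assembly is Python's struct module (my own sanity check, not from the original post):

```python
import struct

# pack the 16-bit value 5 in little-endian ("<H") byte order
raw = struct.pack("<H", 5)
lo, hi = raw  # iterating over bytes yields ints
print(raw, lo, hi)  # → b'\x05\x00' 5 0
```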
I was having the same issue when using Next.js 15 with Turbopack; disabling Turbopack fixed it for me.
Maybe some or all workers do not work with Turbopack (I am not sure).
package.json:
{
  "scripts": {
    "dev": "next dev" // remove the --turbopack option
  }
}
Moving the answer from the question to make it better aligned with the SO format:
It is a bug and I'm not the only one experiencing this: https://github.com/dotnet/runtime/issues/109885
How do I get both the id_token and the access_token?
I need the access token to let the user log in and the id_token to get the user's info.
Thanks!
How to sort a list of strings by their lengths in Dart:
List<String> words = ["mass", "as", "hero", "superhero"];
words.sort((w1, w2) => w1.length.compareTo(w2.length));
print(words);
Output:
[as, mass, hero, superhero]
A bit late to this question, but for those with PHP < 7.4:
If you provide unique keys for each of the initial separate array elements, then array addition works.
Example:
const A = ["a", "aa", "aaa"];
const B = [10 => "b", 11 => "bb"];
const C = [20 => "c"];
const D = A + B + C; // [0 => "a", 1 => "aa", 2 => "aaa", 10 => "b", 11 => "bb", 20 => "c"]
Thanks for all the answers; this community is very helpful.
This might save someone's day: the error in our case was that the permissions for SSRS (SQL Server Reporting Services) were not sufficient. My IIS Application Pool was configured with ApplicationPoolIdentity; I changed the Application Pool to LocalSystem and that fixed it.
Also, adding a low-level try-catch helped me identify the real error, as this error is usually not accurate and there's an underlying error.
Without adding additional packages, you can directly use:
from google.colab.output import eval_js
print(eval_js("google.colab.kernel.proxyPort(5000)"))
This will give you a link that can be accessed remotely from your laptop.
My problem was a duplicated WidgetsFlutterBinding.ensureInitialized(); line in my main file.
It has become even simpler with the proposed safe assignment operator (a JavaScript proposal, not yet standard):
async function dostuff() {
  const [err, res] ?= await fetch('https://codingbeautydev.com');
}
Uncaught runtime error: TypeError: Cannot read properties of null (reading 'useRef') at exports.useRef (http://localhost:3000/static/js/bundle.js:34520:31) at BrowserRouter. In this case, try installing a react-router-dom version that matches your React version: npm i react-router-dom@version
Use the scientisttools Python package.
I removed google_fonts from my pubspec.yaml and it worked.
Did you find any solution? I happen to be stuck on the same issue.
For this case, we can use either Logic Apps or Power Automate; the two services are quite similar.
Hope for the best.
Is it the same process for external Entra ID? When I open the licenses page I only see "This feature is unavailable or doesn't apply to the current tenant configuration". Do I need a premium subscription, or is the page itself not available for external Entra ID?
@Slaine06 How can I handle receiving the VoIP notification in the Dart code?
According to the docs, it doesn't matter which one you use; they're the same: https://swr.vercel.app/docs/mutation
If you're working locally, you can try a Chrome extension called Allow CORS, which may solve your issue.
The Samba project (samba.org) offers a compatible implementation of the Active Directory network protocols that Windows clients happily use as an AD server.
Not sure if it helps and serves your needs, but we have created a samba-container project (https://github.com/samba-in-kubernetes/samba-container/) that also features an Active Directory server container.
Pre-built images are available here: https://quay.io/repository/samba.org/samba-ad-server
Sure, this is not native Windows AD, but it should be compatible enough for most purposes.
Possible reasons for the discrepancy:
Returning Users: Many downloads could be from users who previously purchased the app and are simply reinstalling it. Check the user acquisition reports in Google Play Console to differentiate between new and returning users.
Google Ads Attribution: Ensure conversion tracking for purchases is correctly set up in your Google Ads campaign. Ad clicks might not always result in purchases.
Refunds or Cancellations: Check the Order Management section in Google Play Console for refunded or canceled transactions.
Technical Issues: Test the purchase flow in your app to ensure it’s functioning correctly. Use logs to identify potential errors.
Delayed Reporting: Purchases may take time to appear, depending on payment methods or regional delays.
Fraudulent Installs: Investigate unusual install patterns. Some downloads might not represent genuine user activity.
We spent two days investigating with our DevOps team and eventually found what was causing it: this breaking change: https://learn.microsoft.com/en-us/dotnet/core/compatibility/containers/8.0/aspnet-port
The solution is to use:
skinparam maxmessagesize 180
It affects arrow labels in state diagrams as well.
The "alternative" meta tag in the HTML version points to the plain version of the page. This tag is automatically detected by Lynx.
For me, ssh-add was running the "wrong" command.
On my windows system, there were 2 ssh-add programs - Git's one, and the OpenSSH one that is included with Windows.
Git's one requires the ssh-agent to be started manually with the command line. The OpenSSH one uses the Windows service "OpenSSH Authentication Agent".
For me, this guide https://blog.devgenius.io/how-to-add-private-ssh-key-permanently-in-windows-c9647ebfca3e got me nearly where I needed to be, but that was only part of the puzzle - the missing piece was understanding that I actually had TWO ssh agents installed, and I needed to ensure I was trying to connect to the correct one:
Type where ssh-add to confirm which ssh-add will be invoked when you run the command:
c:\projects\keypay-dev\Basics\Payroll\Payroll>where ssh-add
C:\Windows\System32\OpenSSH\ssh-add.exe
c:\program files\Git\usr\bin\ssh-add.exe
This is how it should look if you want to use the Windows one, and thus benefit from the Windows service and not have to start it from the command line every session.
If the one in the Git folder is above, as it was for me before I corrected it, move C:\Windows\System32\OpenSSH to higher than c:\program files\Git\usr\bin in your PATH variable.
The hook can be found here: https://developer.wordpress.org/reference/hooks/rest_dispatch_request/
function wpse_authenticate_page_route( $dispatch_result, $request, $route, $handler ) {
if ( strpos( $route, '/wp/v2/pages' ) !== false ) {
return new \WP_Error(
'rest_auth_required',
'Authentication required',
array( 'status' => 401 )
);
}
return $dispatch_result;
}
add_filter( 'rest_dispatch_request', 'wpse_authenticate_page_route', 10, 4 );
You may want to check that blocking this route doesn't cause problems for WordPress; the docs do say that blocking the API can break things: https://developer.wordpress.org/rest-api/frequently-asked-questions/#can-i-disable-the-rest-api
Add TrustServerCertificate=True; to your ConnectionString
Initially, MAS seemed like an experimental approach to AI, mainly due to the complexity of their coordination and the challenges of managing multiple agents. However, as the AI field has advanced, the real-world applications of MAS have proven to be transformative. From healthcare and autonomous vehicles to logistics, gaming, and disaster response, MAS is already solving complex problems that traditional, single-agent systems simply couldn't tackle.
The benefits of MAS go beyond theoretical advantages: they are actively changing industries, driving innovation, and enhancing efficiency. With the continuous improvement in AI, communication protocols, and decentralized computing, the potential of MAS will only increase. Additionally, future integration with technologies like blockchain and edge computing will make these systems even more robust, secure, and capable of real-time decision-making.
So we can say that multi-agent systems are far from being just hype. They represent a significant leap forward in AI development, with proven practical applications across industries. As AI continues to evolve, MAS will play an increasingly crucial role in shaping the future of technology and problem-solving. For more information, you may find this interesting as well: https://sdh.global/blog/ai-ml/multi-agent-systems-the-future-of-collaborative-ai/#:~:text=The%20benefits%20are%20already%20being%20used%20across%20different%20industries.
I'm still asking myself why @ChrisF deleted my previous answer, where I added the YouTube video URL to give credit to the person who wrote the code and shared it on YouTube. Below are the scripts I found that helped me solve the same problem.
// inject.js
console.clear = () => console.log('Console was cleared');
const i = setInterval(() => {
if (window.turnstile) {
clearInterval(i);
window.turnstile.render = (a, b) => {
let params = {
sitekey: b.sitekey,
pageurl: window.location.href,
data: b.cData,
pagedata: b.chlPageData,
action: b.action,
userAgent: navigator.userAgent,
json: 1,
};
// we will intercept the message
console.log('intercepted-params:' + JSON.stringify(params));
window.cfCallback = b.callback;
return;
};
}
},5);
// index.js
const { chromium } = require('playwright');
const { Solver } = require('@2captcha/captcha-solver');
const solver = new Solver('Your twocaptcha API key');
const proxyServer = 'Proxy server'; // Proxy server manager
const proxyUser = 'Proxy user';
const proxyPassword = 'Proxy password';
const example = async () => {
const browser = await chromium.launch({
headless: false,
devtools: false,
proxy: { "server": proxyServer, "username": proxyUser, "password": proxyPassword },
});
const context = await browser.newContext({ ignoreHTTPSErrors: true });
const page = await context.newPage();
await page.addInitScript({ path: './inject.js' });
page.on('console', async (msg) => {
const txt = msg.text();
if (txt.includes('intercepted-params:')) {
const params = JSON.parse(txt.replace('intercepted-params:', ''));
console.log(params);
try {
console.log(`Solving the captcha...`);
const res = await solver.cloudflareTurnstile(params);
console.log(`Solved the captcha ${res.id}`);
console.log(res);
await page.evaluate((token) => {
cfCallback(token);
}, res.data);
} catch (e) {
console.log(e.err);
return process.exit();
}
} else {
return;
}
});
await page.goto('site url');
await page.waitForTimeout(5000);
await page.reload({ waitUntil: "networkidle" });
console.log('Reloaded');
};
example();
Please (1) add a dead-letter queue to your target and (2) set RetryPolicy to 0 so that failed attempts are sent straight to the DLQ for inspection. Messages delivered to a DLQ carry metadata attributes explaining any issues/errors.
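A minimal sketch of such a target configuration with boto3, assuming an EventBridge rule; the rule name, function ARN, and queue ARN are placeholders (EventBridge's PutTargets accepts RetryPolicy and DeadLetterConfig per target):

```python
# Hypothetical ARNs and rule name; running the commented call needs IAM permissions.
target = {
    'Id': 'my-target',
    'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:my-fn',
    # Fail fast: no retries, so each failed attempt goes straight to the DLQ.
    'RetryPolicy': {'MaximumRetryAttempts': 0},
    'DeadLetterConfig': {
        'Arn': 'arn:aws:sqs:us-east-1:123456789012:my-dlq',
    },
}
# import boto3
# boto3.client('events').put_targets(Rule='my-rule', Targets=[target])
```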
My fix was instead of importing the error from mongodb:
import { MongoServerError } from 'mongodb';
To import it through mongoose:
import mongoose from 'mongoose';
if (error instanceof mongoose.mongo.MongoServerError) {
...
}
Thank you for that, guys; it resolved my issue straight away.
There are different methods:
1. Read the bonding curve's account info and decode it, then take the virtual SOL reserves and virtual token reserves and use them to calculate the price. Every pump.fun token has the same total supply, i.e. 1 billion, so price * supply = market cap.
2. Get it from the last transaction: check the swapped SOL and mint amounts. SOL amount / mint amount = price, and as in method 1, multiply by the supply to get the market cap.
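As a rough sketch of method 1, assuming you have already fetched and decoded the bonding-curve account into its virtual reserve fields (the field names, 6 token decimals, and lamport units are assumptions, not confirmed by the source):

```python
# Illustrative only: decimals and units are assumptions about pump.fun's layout.
LAMPORTS_PER_SOL = 1_000_000_000
TOKEN_DECIMALS = 6                 # assumed decimals for the token mint
TOTAL_SUPPLY = 1_000_000_000       # every pump.fun token has a 1 billion supply

def price_and_market_cap(virtual_sol_reserves: int, virtual_token_reserves: int):
    """Price in SOL per token from the curve's virtual reserves, plus market cap."""
    sol = virtual_sol_reserves / LAMPORTS_PER_SOL
    tokens = virtual_token_reserves / 10**TOKEN_DECIMALS
    price = sol / tokens
    return price, price * TOTAL_SUPPLY

# Example with made-up reserves: 30 SOL against the full 1B token supply.
price, mcap = price_and_market_cap(30 * 10**9, 1_000_000_000 * 10**6)
```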
import tkinter as tk

root = tk.Tk()
root.title("My app")
root.update_idletasks()
# Subtract the window-decoration offsets so you measure the usable area,
# not just the raw screen resolution.
screen_width = root.winfo_screenwidth() - (root.winfo_rootx() - root.winfo_x())
screen_height = root.winfo_screenheight() - (root.winfo_rooty() - root.winfo_y())
Hi, I encountered the same issue; all I did was delete the poetry.lock file and run poetry install.
You should remove the brackets around the custom rule:
'location' => [
'required',
'string',
'min:3',
'max:100',
new LocationIsValidRule(),
],
See https://laravel.com/docs/11.x/validation#custom-validation-rules
In my case, I load an h5 file from a dataloader. I think it may be caused by multiprocess loading in the background. We can set an environment variable to avoid file locking:
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"
or
export HDF5_USE_FILE_LOCKING=FALSE
Reference: issue comment.
I am trying to read a FITS file containing ROSAT data from the website (https://python4astronomers.github.io/astropy/tables.html).
Under Practical Exercises, the first exercise statement says: Try and find a way to make a table of the ROSAT point source catalog that contains only the RA, Dec, and count rate. Hint: you can see what methods are available on an object by typing e.g. t. and then pressing Tab. You can also find help on a method by typing e.g. t.add_column?.
But my code:

(my_env) C:\Users\labus\Documents\Curtin\Python\pyproj>ipython --matplotlib
Python 3.13.1 (tags/v3.13.1:0671451, Dec 3 2024, 19:06:28) [MSC v.1942 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.30.0 -- An enhanced Interactive Python. Type '?' for help.
Using matplotlib backend: tkagg

In [1]: import matplotlib.pyplot as plt
   ...: import numpy as np
   ...: import astropy
   ...: import tarfile
   ...: from urllib import request
   ...: from astropy.table import Table
   ...: from astropy.io import ascii

In [2]: from astropy.table import Table, Column

In [3]: f = open('ROSAT.fits', 'r')

UnicodeDecodeError                        Traceback (most recent call last)
Cell In[4], line 1
----> 1 f.read()

File c:\users\labus\documents\curtin\python\pyver\python313\Lib\encodings\cp1252.py:23, in IncrementalDecoder.decode(self, input, final)
     22 def decode(self, input, final=False):
---> 23     return codecs.charmap_decode(input,self.errors,decoding_table)[0]

UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 18179: character maps to <undefined>

In [5]: is giving the above UnicodeDecodeError.
Could someone please provide some guidance and maybe an answer as to why this problem is occurring?
Any assistance is greatly appreciated. Thank you - Cobus Labuschagne
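For context on the error itself: FITS is a binary format, and opening it with plain open() in text mode makes Python decode the bytes as cp1252 on Windows, where bytes such as 0x81 are undefined. A small self-contained sketch reproducing the failure mode with a synthetic binary file (the path is illustrative; the FITS-specific fix in the comment assumes astropy is installed):

```python
# Reproduce the failure: undefined cp1252 bytes raise UnicodeDecodeError in text mode.
data = bytes([0x81, 0x00, 0xFF])
with open('/tmp/blob.bin', 'wb') as f:
    f.write(data)

try:
    open('/tmp/blob.bin', 'r', encoding='cp1252').read()
except UnicodeDecodeError as e:
    print('text mode fails:', e.reason)

raw = open('/tmp/blob.bin', 'rb').read()   # binary mode reads the bytes intact

# For FITS specifically, let astropy do the parsing instead of open():
#     from astropy.table import Table
#     t = Table.read('ROSAT.fits')
```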
Since I don't have enough rep to comment, I'm writing an answer.
// when exporting
module.exports = myDateClass
// when importing (note: naming it Date shadows the built-in Date in this module)
const Date = require('./myDateClass')
const date = new Date()
I also faced print issues when running a Flutter app on iOS. There is a way to get all of the device's logs using Xcode:
Open Xcode -> Window -> Devices and Simulators -> select the device you're running on -> Open Console.
This opens a window showing the whole device log, which you can filter by any keyword, with matchers like contains, does not contain, equals, and does not equal.
Redis Insight interprets all inserted characters as part of the query, including optional arguments such as INKEYS or LIMIT. To use optional arguments, you can try Workbench. It also offers syntax auto-completion for Redis Query Engine.
In Source Control you can "Show Stashes" from the menu in the REPOSITORIES section. Check the screenshot below:
Even though Google might say they will update the model on 2020-01-01 00:00:00 flat, the full rollout takes up to a week. During that time, you can get differing OCR results from run to run.
Also they sometimes change the model without notice.
Source: this is an issue I have been dealing with for years when using GOCR.
Short answer: yes
The use of Java records is mentioned in the documentation on object mapping:
Object creation
Spring Data automatically tries to detect a persistent entity’s constructor to be used to materialize objects of that type. The resolution algorithm works as follows:
[...]
4. If the type is a Java Record the canonical constructor is used.
[...]
Using annotations with records is not explicitly mentioned, but I was able to use annotations without any issue (spring-data-cassandra 4.2.x).
As it turned out, vcpkg installs libmupdf without handling its dependencies; a pull request should have fixed the issue but wasn't merged. For now, linking must be done manually (find_library(...)).
In case someone else stumbles upon this question with the same issue: you need to specify the "time_format" parameter, as mentioned in the documentation here:
https://docs.splunk.com/Documentation/Splunk/9.4.0/RESTREF/RESTsearch#search.2Fjobs.2Fexport
It defaults to %FT%T.%Q%:z.
In your case, if you are looking for an ISO formatting, you need to specify %Y-%m-%dT%H:%M:%S.%Q%:z
The documentation about the various time formats used by Splunk is available here : https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Commontimeformatvariables
Note that this also applies to the Splunk Python SDK, where you need to pass "time_format" as a keyword argument.
A workaround to fix this without changing anything else is to insert the following in your WebClient project file:
<PropertyGroup>
<_ExtraTrimmerArgs>--keep-metadata parametername</_ExtraTrimmerArgs>
</PropertyGroup>
The real issue should be fixed in https://github.com/dotnet/runtime/issues/81979
Below is how to remove or replace the prefix by transferring the table into the dbo schema:
ALTER SCHEMA dbo TRANSFER [PrefixName].[salesorder]
@Bean
public ServletServerContainerFactoryBean createWebSocketContainer() {
    ServletServerContainerFactoryBean container = new ServletServerContainerFactoryBean();
    // Maximum text message buffer size
    container.setMaxTextMessageBufferSize(512 * 1024);
    // Maximum binary message buffer size
    container.setMaxBinaryMessageBufferSize(512 * 1024);
    // Async send timeout
    container.setAsyncSendTimeout(20000L);
    // Session idle timeout (optional)
    container.setMaxSessionIdleTimeout(300000L);
    return container;
}
I ran into the same problem today and managed to fix it.
I had this code in my component:
const { join } = useMeeting({ ... ... });
useEffect(() => {
join();
}, []);
The problem seems related to join, so I added a timeout around it:
useEffect(() => {
setTimeout(() => {
join();
}, 500);
}, []);
This is a hacky solution, but it saved my day.
#include <stdio.h>
#include <stdlib.h>   /* _gcvt (Microsoft CRT) */

char s[15];
double val1 = 3.1415926535;   /* _gcvt takes a double */
_gcvt(val1, 10, s);
printf("s: %s\n", s);
There is a package for Android apps written with Flutter, but I'm not sure about iOS. Here it is: https://pub.dev/packages/flutter_background_video_recorde
This one works perfectly for me, producing one filename per line listing, short format
ls -p . | grep -v '/'
Disagree with all the previous answers:
I would recommend manually performing all necessary validation checks before executing the persistence or database operation. For example, verify whether related entities exist or any foreign key dependencies are present. If a violation is detected, you can throw an IllegalStateException or a custom exception that clearly indicates the issue. This approach ensures that your business logic is handled explicitly in your service layer rather than relying on database constraints to handle errors.
Reference: https://stackoverflow.com/a/77125211/16993210
0.2.50 provides the "Adj. Close" column.
!pip install yfinance==0.2.50

import yfinance as yf
df = yf.download('nvda')
df.columns
MultiIndex([('Adj Close', 'NVDA'),
            (    'Close', 'NVDA'),
            (     'High', 'NVDA'),
            (      'Low', 'NVDA'),
            (     'Open', 'NVDA'),
            (   'Volume', 'NVDA')],
           names=['Price', 'Ticker'])
How do I make a correct field request in VB? All my attempts at writing the fields end in an error...
Absolute path: gives you a clear direction from the beginning to the exact spot, regardless of where you are now (the full address in the filesystem).
Relative path: tells you how to get somewhere based on where you are already standing (your current working directory is your starting point).
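The same distinction shows up in code. A small Python sketch with pathlib (the paths themselves are hypothetical); PurePosixPath is used so the behavior is the same on any OS:

```python
from pathlib import PurePosixPath

# Absolute path: a full address from the filesystem root, unambiguous anywhere.
absolute = PurePosixPath('/home/user/docs/notes.txt')

# Relative path: only meaningful once combined with a starting point (the CWD).
relative = PurePosixPath('docs/notes.txt')

print(absolute.is_absolute())   # True
print(relative.is_absolute())   # False

# Joining a starting directory with a relative path yields an absolute one.
print(PurePosixPath('/home/user') / relative)   # /home/user/docs/notes.txt
```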
Well, it turns out my phone being on Android 10 was the issue. According to the Android API docs, you need additional permissions:
https://developer.android.com/develop/connectivity/bluetooth/bt-permissions