Removing node_modules and package-lock.json and then running npm install did it for me. The local files were out of sync after an outdated git pull.
Can you explain your code? I just want to learn.
Did you fix this? I am having the same issue and am really confused, as the user can see the files in the web interface, just not in the _api/web/recycleBin endpoint.
You can drop a row with df.drop(index=0), where df is your DataFrame and index is the index label of the row you want to drop.
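A minimal sketch of the idea (hypothetical data; note that drop returns a new DataFrame by default rather than modifying in place):

```python
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 30]})

# Drop the row with index label 0; the original df is left unchanged.
df2 = df.drop(index=0)

print(list(df2["a"]))  # [20, 30]
```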
You can also use this Ansible one-liner to generate the template without adding code to your playbooks or role files:
ansible -m template -a "src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini" localhost
Yes, you can implement a secure over-the-air exchange of the certificate pins.
Your app can download a signed list of pins from an unpinned domain. The signature of the data payload ensures the data's authenticity and can be validated on the client side using a built-in public key (baked into the mobile app).
You need to establish a process to keep the list of pins up-to-date when a new certificate is issued so that the app always downloads the correct pins. Also, as a pro tip, do not forget to add a challenge to the request from the mobile client to prevent replaying the response (the response should be a signed list of pins together with the challenge).
Here is an example tutorial on how to implement dynamic SSL pinning on iOS and Android:
https://developers.wultra.com/tutorials/posts/Dynamic-SSL-Pinning-for-Mobile-Apps/
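The challenge–response idea above can be sketched as follows. This is a hypothetical illustration: a real deployment would use an asymmetric signature (e.g. ECDSA, with the public key baked into the app), but stdlib HMAC is used here only to keep the sketch self-contained.

```python
import hashlib
import hmac
import json
import secrets

SHARED_KEY = b"demo-key"  # stand-in for a real signing key pair

def server_sign_pins(pins, challenge):
    # Server signs the pin list TOGETHER with the client's challenge.
    payload = json.dumps({"pins": pins, "challenge": challenge}).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, sig

def client_verify(payload, sig, expected_challenge):
    # Client checks the signature AND that its own fresh challenge is echoed
    # back, which prevents replaying an old signed response.
    expected_sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected_sig):
        return None
    data = json.loads(payload)
    if data["challenge"] != expected_challenge:
        return None
    return data["pins"]

challenge = secrets.token_hex(16)          # fresh per request
payload, sig = server_sign_pins(["sha256/AAAA...", "sha256/BBBB..."], challenge)
print(client_verify(payload, sig, challenge))
```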
It seems like the issue was with the shebang and line endings. Changing the line endings from CRLF to LF resolved the issue.
DVC saves files in a content-addressable way so it can find previous versions of a file or directory. If you push files directly, the question is how you find the previous state of the directory (e.g. if you need to remove some files).
If you'd like to have files in a human-readable format, I would recommend a bit different setup / workflow.
Consider using DataChain + DVC as shown here, for example: https://github.com/shcheklein/example-datachain-dvc (+ DataChain gives a way to manage and query data granularly).
The difference for you in this case is that you push, upload, and modify files directly in the cloud. DataChain captures a snapshot of the state of the bucket(s) when you need to produce a new version of a dataset and saves that snapshot into DVC.
So, instead of copying data into DVC, you are essentially saving a file with references to the original data. It's a different way of doing versioning, if you will, where both tools work nicely together.
Just got this problem implementing the Autofill component on a laptop with a touchpad.
Based on Leandro's debugging plus my own, I'm guessing the problem lies in how touchpad clicks are registered:
it seems there is something wrong with the listeners in the code (I will confirm tomorrow when I get the time to look into it).
I was really struggling with the installation of Python 3.8 and pip on my Ubuntu 16.04 system. I tried several methods, but I kept running into the issue with the platform.linux_distribution error, and none of the solutions seemed to work. I tried switching Python versions and adjusting paths, but it only made things worse.
Then I came across a guide that helped me solve the issue quickly: this detailed guide gave me a much clearer approach compared to the others I had found. The steps were straightforward and worked flawlessly, resolving my pip installation issues in no time.
I highly recommend checking it out if you're facing similar struggles—this guide was way more helpful than the others!
The value of your $signed_field_names variable should include "reference_number".
See pages 71 and 72 in their docs.
The issue likely occurs because iOS isn't rendering the "Open-Sans-Bold" font properly. Ensure the font file is correctly linked and provided in web-safe formats like .woff or .woff2. Explicitly set font-weight: bold; for the strong tag and include a fallback font-family, such as sans-serif, in case the primary font fails. Verify the font file is loading correctly in the browser's developer tools, as missing or incompatible files can cause this issue.
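A minimal sketch of the fixes described above (the font paths and family name are hypothetical; adjust to your setup):

```css
/* Ensure the bold face actually loads, with web-safe formats */
@font-face {
  font-family: "Open Sans";
  src: url("/fonts/OpenSans-Bold.woff2") format("woff2"),
       url("/fonts/OpenSans-Bold.woff") format("woff");
  font-weight: bold;
}

/* Explicit bold weight plus a generic fallback family */
strong {
  font-family: "Open Sans", sans-serif;
  font-weight: bold;
}
```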
The different CapCut Cloud Space subscription options typically vary based on the amount of storage you get. If you subscribe to the cheapest option, you’ll likely get less storage space but still enjoy all the basic cloud benefits, like saving and accessing your projects across devices. If you're looking to use CapCut templates or work on more complex edits, consider how much storage you’ll need to avoid running out. You can always upgrade later if needed!
If you run into this error with adapter-static / SSG, you can detect the build process via the building flag and avoid accessing page.url.searchParams while prerendering:
<script>
  import { building } from '$app/environment';
  import { page } from '$app/state'; // assumed import; adjust to your setup
  let results = $derived.by(() => {
    const searchText = !building ? page.url.searchParams.get('searchText') || "" : "";
    // ... derive and return results from searchText ...
  });
</script>
Thanks very much to Scott (Svelte Discord)
I didn't have to provide a config file, so the line below worked:
kafka-topics.sh --bootstrap-server localhost:9092 --topic first_topic --create --partitions 5 --replication-factor 1
The error you're getting suggests that the ST_GeomFromEWKT function is not available. This function is part of PostGIS, so it seems that either PostGIS is not properly installed or not enabled in your database.
Ensure PostGIS is properly installed and enabled in your database. You can check this by running the SQL command:
SELECT PostGIS_version();
If this returns a version number, PostGIS is installed and enabled.
Modify your SQLAlchemy class to use Geography instead of Geometry in Python:
from geoalchemy2 import Geography

class Prueba(Base):
    # ... other columns ...
    punto = Column(Geography(geometry_type='POINT', srid=4326))
When inserting data, use the func.ST_GeogFromText function:
from sqlalchemy import func

prueba = Prueba(
    name="Prueba_2",
    age=5,
    created_at=datetime.now(),
    # ST_GeogFromText takes a single EWKT string, so the SRID goes in the text
    punto=func.ST_GeogFromText('SRID=4326;POINT(-1.0 1.0)'),
)

with Session() as session:
    session.add(prueba)
    session.commit()
This approach should work with your PostGIS-enabled table in Supabase. It uses the ST_GeogFromText function to convert the WKT string to a geography object directly in the database.
Does a normal timer not work? If I understand correctly, Dispatcher is meant to work with the UI thread, so once your application's UI thread is suspended (going into the background) it stops working.
I would first suggest using a typical timer:
System.Timers.Timer timer = new System.Timers.Timer(TimeSpan.FromSeconds(1).TotalMilliseconds);
timer.Elapsed += (s, e) =>
{
    // cancel activities
};
timer.AutoReset = true; // restart automatically after each interval
timer.Start();
I wonder if this works only for games, as in the examples above. I am trying this with school software and added a YouTube URL, formatted as specified, about 4 weeks ago. The video is still not auto-playing.
I read somewhere else what the reasons could supposedly be; no idea if any of that is true, and it's not mentioned in the Google Play Console docs (which are not very detailed).
If you don't need it to be the same profile as your normal profile, you can get away with
Then, any settings or browser extensions you change on that profile will actually be saved instead of lost every time. This way doesn't require you to exit all your chrome tabs before debugging, and doesn't need any setting changes in Webstorm / IntelliJ.
Turns out that the fakefs cls.fake_fs().cwd needs to be cast to str on Windows. This is very likely platform-related, as Simon mentioned in the comments after trying to reproduce the problem on his Linux machine. In this case cwd needs to be set this way:
cls.fake_fs().cwd = str(Path.cwd() / "gamedev_co")
This is how it is documented in the Linux source:
/**
* ...
* @fuzz: specifies fuzz value that is used to filter noise from
* the event stream.
* @flat: values that are within this value will be discarded by
* joydev interface and reported as 0 instead.
* ...
*/
If anyone needs support for .nvmrc with nvm4w, I forked the project and added the feature here: https://github.com/epastorino/nvm-windows-with-nvmrc
Just compile the project, and if you run "nvm.exe use" without a version argument, it will look for a version string in the .nvmrc file in the current working directory.
Use android:maxLines="2"
combined with android:ellipsize="end"
<TextView
android:id="@+id/____"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginHorizontal="10dp"
android:layout_marginBottom="10dp"
android:ellipsize="end"
android:maxLines="2"
android:textColor="@color/black"
android:textSize="16sp"
app:layout_constraintBottom_toTopOf="@+id/_____"
app:layout_constraintEnd_toStartOf="@+id/_____"
app:layout_constraintStart_toEndOf="@id/startLocationPin"
app:layout_constraintTop_toTopOf="parent"
tools:text="Line 1\nLine 2" />
If this solution does not fit your problem, please let me know.
Since you mentioned Java, I will say this can be done via Spring Boot. What you are looking for is a multi-tenant setup. You can read the following article to get an idea.
Create tsconfig file using
npx tsc --init
After that, add this to allow JSX in TypeScript:
{
  "compilerOptions": {
    "jsx": "react",
    ...
  }
}
Another way:
tsTABLE[] Tables = Enumerable.Range(0, 4).Select(t => new tsTABLE()).ToArray();
Take a look at https://stackoverflow.com/a/77465601/11927087.
As mentioned there, add this to your package.json file:
"resolutions": {
"wrap-ansi": "7.0.0",
"string-width": "4.1.0"
}
I've tested your exact solution and it worked, both the export with all 3 options, and returning to the Policy Transactions page without breaking the UI. Which cloud version are you on? And which browser? May I suggest closing studio and running 'gwb clean' and then try again? Tends to be useful when pcfs are not working properly.
To anyone coming here: the solution from mike works perfectly, even updating in the Visual Studio designer while designing. I created this account specifically to comment on this.
I compiled his GridHelper class into a separate assembly and referenced it in my main assembly. After adding the xmlns reference it works exactly as he described. Link to the specific comment with the GridHelper code:
https://stackoverflow.com/a/74534620/29094387
Make sure you define both the row and column just like mike's example. You can then use the standard Grid.RowSpan and Grid.ColumnSpan if needed.
Yes, check the reference article below. You don't need to import packages; Spring Boot does it for us automatically.
Many Spring Boot developers like their apps to use auto-configuration, component scan, and be able to define extra configuration on their "application class". A single @SpringBootApplication annotation can be used to enable those three features, that is:
@EnableAutoConfiguration: enables Spring Boot's auto-configuration mechanism
@ComponentScan: enables @Component scan on the package where the application is located (see the best practices)
@Configuration: allows registering extra beans in the context or importing additional configuration classes
The @SpringBootApplication annotation is equivalent to using @Configuration, @EnableAutoConfiguration, and @ComponentScan with their default attributes.

Try this: ensure the ul has a higher z-index than the menubar and both have a position set (e.g., relative, absolute).
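Going back to the @SpringBootApplication point above, the equivalence can be sketched like this (hypothetical package and class names):

```java
package com.example.myapp;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Equivalent to @Configuration + @EnableAutoConfiguration + @ComponentScan
// with their default attributes.
@SpringBootApplication
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}
```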
Same here.
The GLAD website doesn't seem to provide the "KHR" and "glad" header files when the tutorial says it will.
Does anybody know where to get these header files?
Initially, I encrypted the AMI using a different KMS key, and the AMI was shared across accounts as part of the EC2 builder setup.
Subsequently, when I implemented a multi-region setup within the same AWS account, I shared the AMI across regions by creating a new multi-region KMS key. However, the Auto Scaling Group (ASG) was unable to launch the EC2 instances, and I encountered the same error mentioned earlier. I referred to the AWS post here.
To resolve this, I attached the service-linked role to the newly created KMS key, which successfully addressed the issue and enabled the ASG to launch the instances.
You need to run this command inside of the built-in terminal in VS Code.
pip3 install pyinputplus
If you run this in your Mac terminal it will only install the package locally. VS Code runs in its own virtual environment.
Let me know if that helps!
Thanks to @Ihora for the help. The names_glue argument seems a bit more appropriate to me (instead of names_prefix) in this situation. It is no longer necessary to use englue() or ensym(), since .value is used: names_glue = "{.value}_{id}".
library(tidyverse)
data <- data.frame(
  id = c(1, 2, 1, 2, 1, 2),
  name = c("jane", "jane", "lauran", "lauran", "james", "james"),
  month = c("april", "april", "may", "june", "june", "june"))
pivot_data <- function(input_data, measure_1) {
  input_data %>%
    arrange(name, id) %>%
    pivot_wider(
      names_from = id,
      names_glue = "{.value}_{id}",
      values_from = {{measure_1}}
    )
}
pivot_data(input_data = data,
measure_1 = month)
#> # A tibble: 3 Ă— 3
#> name month_1 month_2
#> <chr> <chr> <chr>
#> 1 james june june
#> 2 jane april april
#> 3 lauran may june
Created on 2025-01-07 with reprex v2.1.0
You can run polling logic inside a useEffect hook and update a state variable to save the result. Refer to the example in the post below.
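The polling logic itself, independent of React, can be sketched like this; startPolling is a hypothetical helper, and the function it returns is what you would return from the useEffect callback as the cleanup:

```javascript
// Hypothetical sketch: call fetchFn every intervalMs and push results to onUpdate.
function startPolling(fetchFn, onUpdate, intervalMs) {
  const id = setInterval(async () => {
    onUpdate(await fetchFn());
  }, intervalMs);
  // Return a cleanup function, as you would from a useEffect callback.
  return () => clearInterval(id);
}

// Inside a component, roughly:
// useEffect(() => startPolling(fetchStatus, setStatus, 5000), []);
```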
Go to settings and search for 'Render Line Highlight'. Change the setting to 'None'.
This worked for me: if you are using iTerm2 or a similar terminal, just uncheck the setting "Scroll wheel sends arrow keys when in alternate screen mode".
I was having the same problem as you. The solution for me was to find the binary location for google-chrome and pass it to SB; this is what worked for me:
chrome_binary_path = "/opt/google/chrome/google-chrome"
with SB(uc=True, xvfb=True, binary_location=chrome_binary_path) as sb:
    ...  # your scraping code
The default action for warnings is "default" ("If a warning is reported and doesn’t match any registered filter then the “default” action is applied (hence its name)."), which means "print the first occurrence of matching warnings for each location (module + line number) where the warning is issued".
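A quick way to see this per-location deduplication (a sketch; resetwarnings() clears custom filters so the unmatched UserWarning falls through to the implicit "default" action):

```python
import warnings

def noisy():
    warnings.warn("careful")  # same module and line number on every call

with warnings.catch_warnings(record=True) as caught:
    warnings.resetwarnings()  # leave only the implicit "default" action
    for _ in range(3):
        noisy()

print(len(caught))  # 1: repeats from the same location are suppressed
```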
As suggested by M.O., I ended up placing make_config() in a settings.py file and importing the config from there wherever needed. This resolved the circular dependency issue and also made the app structure less convoluted.
Any suggestions are, of course, welcome.
You should include the 'guidewire' tag for Guidewire-related questions, since it's the one being more actively monitored. Regarding the _PROP suffix in the data dictionary, it refers to all the auto-generated virtual properties. If you check the generated Java class for an entity in the data model, all those properties appear with the gw.internal.gosu.parser.ExtendedProperty annotation.
Check the tag list in the SSML documentation. Certain tags will not work with Neural voices in TTS. If you want the widest range of outputs, don't use Neural.
==> Twilio Account
Add a whitelist number: Phone Numbers > Manage > Verified Caller IDs > click the Add a new Caller ID button, select the country, and add the number. An OTP is shown; you then receive a call and enter the 6-digit OTP during the call.
Create a new TwiML App: Phone Numbers > Manage > TwiML Apps.
Click the Create new TwiML App button at the top right and add a friendly name, like your app name.
Under Voice Configuration, add a Request URL like this: https://voice-quickstart-server-nodes-7835.twil.io/make-call (this URL is only a demo; add the URL of the function you create here).
Click the Create button and your app is created successfully.
Then click the application name and copy the TwiML App SID: **********************************
We will update this URL later, then save the TwiML App.
Make new API Keys & Tokens: go to https://console.twilio.com/us1/account/keys-credentials/api-keys and click the Create API Key button.
Add a friendly name and create the API key; a screen opens automatically where you can copy the secret key.
It shows this success message: "You have successfully created a new API key in us1. Please make sure to copy the secret key from this page." SID: ********************************** Secret: ********************************** Then tick the checkbox "Got it! I have saved my API key SID and secret in a safe place to use in my application" and click the Done button.
Add credentials: go to https://console.twilio.com/us1/account/keys-credentials/credentials
Two options are shown, Public key and Push credentials; select the second option, Push credentials.
Click the Create new Credential button; a dialog opens.
In the dialog, add a FRIENDLY NAME, TYPE, and FCM SECRET (create one credential for Android and one for iOS, changing only the type). Example: FRIENDLY NAME: pharmcrm-dev TYPE: FCM Push Credentials FCM SECRET:
Find the FCM SECRET in your Firebase dev project.
COPY: CREDENTIAL SID (Dev) and CREDENTIAL SID (Prod).
Services:
Add all the function code here.
All the functions here are protected by default, so make them Public.
Say my number is +99999999999; this number is used by our function and application.
Click the number and add the function:
Voice configuration: for "A call comes in", a webhook is selected by default; select Function instead.
Service: Default is selected by default; I select my voice-quickstart-server-nodes.
Environment: select UI.
Function Path: select my /Incoming-call function.
Save the configuration.
The calling functionality is now done.
—————————————————— Functions —————————————————
make-call
const AccessToken = require('twilio').jwt.AccessToken;
const VoiceGrant = AccessToken.VoiceGrant;
const VoiceResponse = require('twilio').twiml.VoiceResponse;
/**
Creates an endpoint that can be used in your TwiML App as the Voice Request Url.
In order to make an outgoing call using Twilio Voice SDK, you need to provide a
TwiML App SID in the Access Token. You can run your server, make it publicly
accessible and use /makeCall endpoint as the Voice Request Url in your TwiML App.
@returns {Object} - The Response Object with TwiMl, used to respond to an outgoing call
@param context
@param event
@param callback
*/
exports.handler = function(context, event, callback) {
// The recipient of the call, a phone number or a client
console.log(event);
const from = event.From;
let to = event.to;
if (isEmptyOrNull(to)) {
to = event.To;
if (isEmptyOrNull(to)) {
console.error("Could not find someone to call");
to = undefined;
}
}
const voiceResponse = new VoiceResponse();
if (!to) {
voiceResponse.say("Welcome, you made your first call.");
} else if (isNumber(to)) {
const dial = voiceResponse.dial({ callerId: from });
dial.number(to);
} else {
console.log(`Calling [${from}] -> [${to}]`);
const dial = voiceResponse.dial({ callerId: from, timeout: 30, record: "record-from-answer-dual", trim: "do-not-trim"});
dial.client(to);
}
callback(null, voiceResponse);
}
const isEmptyOrNull = (s) => {
return !s || s === '';
}
const isNumber = (s) => {
return !isNaN(parseFloat(s)) && isFinite(s);
}
———————————————————————————————————————————————
incoming-call
const AccessToken = require('twilio').jwt.AccessToken;
const VoiceGrant = AccessToken.VoiceGrant;
const VoiceResponse = require('twilio').twiml.VoiceResponse;
const axios = require('axios'); // Ensure axios is required
/**
Returns a specific string based on the toClient value.
@param {string} toClient - The client number to check.
@returns {string} - The corresponding string for the toClient value.
*/
const getClientSpecificString = (toClient) => {
switch (toClient) {
case '+999999999':
return 'valcare_789';
case '+665115166536512':
return 'asdsdkhdkahsdkashd';
case '+155478901':
return 'stringForClient1';
case '+155655478902':
return 'stringForClient2';
case '+356433543453':
return 'stringForClient3';
default:
return 'defaultString';
}
};
/**
Handles the call and dials a client or a number based on the To parameter.
@returns {Object} - The Response Object
@param context
@param event
@param callback
*/
exports.handler = async function(context, event, callback) {
// Get the fromNumber from the event
const fromNumber = event.From || event.Caller || event.CallerNumber;
// const storeNumber = event.extraOptions; // API Testing Store Number not get
console.log("From Number:", fromNumber);
// Get the toClient from the event
const toClient = event.To || 'valcare_123';
console.log("To Client:", toClient);
// Create a voiceResponse object
const voiceResponse = new VoiceResponse();
try {
// Call your API to get whitelisted numbers
const response = await axios.get('https://voice-quickstart-server-nodes-4119.twil.io/white-list');
const whitelistedNumbers = response.data;
console.log('Whitelisted Numbers:', whitelistedNumbers);
// Check if fromNumber is in the list of whitelisted numbers
const isWhitelisted = whitelistedNumbers.includes(fromNumber);
if (isWhitelisted) {
// Get the specific string for the given toClient
const specificString = getClientSpecificString(toClient);
console.log('Specific String for toClient:', specificString);
// Dial the client if fromNumber is whitelisted
const dial = voiceResponse.dial({
callerId: fromNumber,
timeout: 30,
record: "record-from-answer-dual",
trim: "trim-silence"
});
dial.client(specificString);
} else {
console.log('Non-whitelisted Call:');
console.log('From Number:', fromNumber);
console.log('To Client:', toClient);
// Dial the number if fromNumber is not whitelisted
const dial = voiceResponse.dial({
callerId: fromNumber
});
dial.number('9067954201'); // Vinay Mori
// dial.number(storeNumber); // API fetching store No. not get
}
callback(null, voiceResponse.toString());
} catch (error) {
console.error('Error fetching whitelisted numbers:', error);
// Handle the error accordingly
callback(error);
}
};
const isEmptyOrNull = (s) => {
return !s || s === '';
}
const isNumber = (s) => {
return !isNaN(parseFloat(s)) && isFinite(s);
}
———————————————————————————————————————————————
white-list
const twilio = require('twilio');
// Your Twilio Account SID and Auth Token
const accountSid = '**********************************';
const authToken = '**********************************';
// Create a Twilio client
const client = new twilio(accountSid, authToken);
exports.handler = async function(context, event, callback) {
try {
// Fetch the list of verified caller IDs
const callerIds = await client.outgoingCallerIds.list();
// Extract the verified caller IDs
const verifiedCallerIds = callerIds.map(callerId => callerId.phoneNumber);
console.log('Verified Caller IDs:', verifiedCallerIds);
// Return the list of verified caller IDs as JSON
callback(null, verifiedCallerIds);
} catch (error) {
console.error('Error fetching verified caller IDs:', error);
callback(error);
}
}
———————————————————————————————————————————————
generate-access-token
const twilio = require('twilio');
exports.handler = function (context, event, callback) {
const TWILIO_ACCOUNT_SID = '**********************************';
const TWILIO_API_KEY = '**********************************';
const TWILIO_API_SECRET = '**********************************';
const TWILIO_APPLICATION_SID = '**********************************';
const ANDROID_PUSH_CREDENTIAL_SID_DEV = '**********************************';
const ANDROID_PUSH_CREDENTIAL_SID_PROD = '**********************************';
const IOS_PUSH_CREDENTIAL_SID_SANDBOX = '**********************************';
const IOS_PUSH_CREDENTIAL_SID_PRODUCTION ='**********************************';
const identity = event.identity;
const platform = event.platform;
const isProduction = event.isProduction;
if (!identity || !platform) {
return callback(null, {
statusCode: 400,
message: 'Identity and platform are required',
});
}
try {
const AccessToken = twilio.jwt.AccessToken;
const VoiceGrant = AccessToken.VoiceGrant;
let pushCredentialSid = null;
if (platform === 'ios') {
pushCredentialSid = isProduction ? IOS_PUSH_CREDENTIAL_SID_PRODUCTION : IOS_PUSH_CREDENTIAL_SID_SANDBOX;
} else if (platform === 'android') {
pushCredentialSid = isProduction ? ANDROID_PUSH_CREDENTIAL_SID_PROD : ANDROID_PUSH_CREDENTIAL_SID_DEV;
} else {
return callback(null, {
statusCode: 400,
message: 'Invalid platform',
});
}
const voiceGrant = new VoiceGrant({
outgoingApplicationSid: TWILIO_APPLICATION_SID,
pushCredentialSid: pushCredentialSid,
incomingAllow: true,
});
const token = new AccessToken(TWILIO_ACCOUNT_SID, TWILIO_API_KEY, TWILIO_API_SECRET, { identity });
token.addGrant(voiceGrant);
const jwtToken = token.toJwt();
console.log('jwtToken is here :', jwtToken);
return callback(null, {
token: jwtToken,
});
} catch (error) {
return callback(null, {
statusCode: 500,
message: `Unable to generate token: ${error.message}`,
});
}
};
——————————————————————————————————————————————————————
Why not just do:
SELECT
    date_trunc('week', ((EXTRACT(YEAR FROM CURRENT_DATE) || '-01-04')::date))::date
        AS first_monday_of_iso_year;
January 4 always falls in ISO week 1, so truncating its week to Monday gives the first day of the ISO year. Note the cast of the concatenated string to date before passing it to date_trunc.
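For intuition, the same rule sketched in Python (January 4 is guaranteed to lie in ISO week 1, so the Monday of its week starts the ISO year):

```python
import datetime

def first_monday_of_iso_year(year):
    # Jan 4 always falls in ISO week 1 of `year`.
    jan4 = datetime.date(year, 1, 4)
    # Step back to the Monday of that week (isoweekday: Mon=1 ... Sun=7).
    return jan4 - datetime.timedelta(days=jan4.isoweekday() - 1)

print(first_monday_of_iso_year(2025))  # 2024-12-30
```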
Update with Material UI v6:
<ListItem
  sx={{
    minWidth: "fit-content" // <-- only add this style
  }}
>
  <ListItemText primary="as usual" />
</ListItem>
I was trying to go on Google math.net but nothing is working. I watched a YouTube video about it and nothing happened, so I was confused, thinking he was just lying. On a school Chromebook, TPS blocks every web tab, so either Google math.net isn't working or it's a scam; if he really uses it, he must be at home where everything is unblocked. I don't know.
I currently have an ESM project that needs to read text-like files (html / handlebars), and I have activated support for require() just for the case of loading those files.
This is working for me:
import Module from "node:module";
const require = Module.createRequire(import.meta.url);
const templates={
test: require('./test.handlebars')()
}
console.log('File output: ', templates.test)
Same issue here. It started happening today. My page is server-side rendered and it's super light. I don't know how to improve it.
I had the same problem after a system upgrade and fixed it by patching textblob_de. My pull request with the changes is here, if you need a quick fix as well: https://github.com/markuskiller/textblob-de/pull/25
Something like this is what ended up working for me:
load("@npm//:eslint/package_json.bzl", eslint_bin = "bin")
eslint_bin.eslint_test(
name = "lint",
chdir = package_name(),
data = glob(["src/**/*.ts", "src/**/*.tsx"]) + [".eslintrc.cjs"],
args = ["."],
)
CPMpy constraints are CPMpy expressions, with the requirement that they are boolean. Each expression can have nested subexpressions. Any operation comparing two constraints will return a new constraint. Thus, you can save the first constraint in an object c and then use a loop to create the disjunction with c | new_constraint.
In your example, based on the 'this is what I want in the end' lines of code, this can be done as follows:
# Assume we have the following list of lists for mask
foo = [[1,1], [3,2], [5,0]]
c = False # Just to create an expression which won't play a role in the disjunction
for f in foo:
c |= (cp.abs(cp.sum(p[f])) == 3)
m += c
(Notice that I use cp.abs and cp.sum.)
A more elegant way to do it would be using the cp.any() expression, which basically is what I understood you want.
m += cp.any(cp.abs(cp.sum(p[f])) == 3 for f in foo)
This creates the following constraint (with the foo list of list I used):
or([abs(sum([puzzle[1,0], puzzle[1,1], puzzle[1,2], puzzle[1,3], puzzle[1,4], puzzle[1,5], puzzle[1,0], puzzle[1,1], puzzle[1,2], puzzle[1,3], puzzle[1,4], puzzle[1,5]])) == 3, abs(sum([puzzle[3,0], puzzle[3,1], puzzle[3,2], puzzle[3,3], puzzle[3,4], puzzle[3,5], puzzle[2,0], puzzle[2,1], puzzle[2,2], puzzle[2,3], puzzle[2,4], puzzle[2,5]])) == 3, abs(sum([puzzle[5,0], puzzle[5,1], puzzle[5,2], puzzle[5,3], puzzle[5,4], puzzle[5,5], puzzle[0,0], puzzle[0,1], puzzle[0,2], puzzle[0,3], puzzle[0,4], puzzle[0,5]])) == 3])]
I find the easiest way to get an instance to text is to create a calculated field with the type of Concatenate Text. Just pick the field as the only input.
So in this case you would see this:
Got it. The trick is to have the library code use an ambient types from a d.ts file (probably nested under a namespace, but you do you), and then have the application code override that type in their own d.ts file.
Library:
// library-namespace.d.ts
declare namespace MyLibrary {
  type EventName = string; // we don't care in the lib. String is fine.
}
Application:
// library-overrides.d.ts
declare namespace MyLibrary {
  type EventName = 'This' | 'That'; // app-specific event types
}
Since this is a new feature, Google Sheets Apps Script doesn't currently provide a direct way to create a Sheet Table. There is a pending Issue Tracker post related to this. I suggest hitting the +1 button to signify that you also have the same issue/request, and consider adding a star (at the top left) so Google developers prioritize the issue.
Reference: Issue Tracker
What I did when this happened was, while the resource creation was in progress, I went to ECS in the AWS Console, opened up the cluster with the problem and went to the "Tasks" tab. In there, you can click on a failed task and a message should pop up at the top saying why the task failed to initialize.
https://repost.aws/knowledge-center/ecs-service-stuck-update-status also says you can look at the "Events" tab for the cluster's service. The events view didn't show me any errors, but supposedly sometimes it does.
I am just about to start using this service after creating a BearerToken. I'm testing in Postman with the same data, but I'm getting a response of Error 418 I'm a teapot (RFC 2324). The API and token are correct, because I get a response when requesting our current orders - https://api.parcel.royalmail.com/api/v1/orders
Here is the Postman JSON:-
{
"items": [
{
"recipient": {
"address": {
"fullName": "Tom",
"companyName": "Test",
"addressLine1": "150",
"addressLine2": "Valley Close",
"addressLine3": "Elmdom",
"city": "Birmingham",
"county": "West Midlands",
"postcode": "B12 2YT",
"countryCode": "GB"
},
"emailAddress": "[email protected]"
},
"billing": {
"address": {
"fullName": "Tom",
"companyName": "Test",
"addressLine1": "150",
"addressLine2": "Valley Close",
"addressLine3": "Elmdom",
"city": "Birmingham",
"county": "West Midlands",
"postcode": "B12 2YT",
"countryCode": "GB"
},
"phoneNumber": "42425 5252552",
"emailAddress": "[email protected]"
},
"orderDate": "2025-01-07T14:15:22Z",
"subtotal": "0",
"shippingCostCharged": "0",
"total": "0"
}
] }
I know it's been a while, but I don't suppose anyone would know why I get a 418 error? It isn't even one of the returned errors listed in the docs for this API call.
Thanks
I am new to Mac, but I tried this:
sudo chown -R $(whoami) /opt/homebrew/var/mysql

brew services start [email protected]
It started working fine.
Semaphores require you to submit to the GPU using vkQueueSubmit(), which is an expensive operation. Having fewer vkQueueSubmit() calls rather than many is always going to be better.
Moreover, barriers allow finer control over synchronization, giving you better optimization opportunities. Semaphores are better suited to frame graphs, where multiple passes can execute in parallel and have dependencies on other passes.
When I write the following code:
function open(isPacked: boolean): JSX.Element {
  console.log(isPacked, 'to do');
  return <>{isPacked ? 'hello' : 'wewe'}</>;
}
<button onClick={open(false)}>registration</button>
I encounter the following error:
Type 'Element' is not assignable to type 'MouseEventHandler'. Type 'Element' provides no match for the signature '(event: MouseEvent<HTMLButtonElement, MouseEvent>): void'.
The reason for this error is that the open function expects a boolean parameter (isPacked: boolean), but I am trying to assign its result directly to the onClick event handler of the button. The event handler (defined as event: React.MouseEvent) automatically receives a MouseEvent as its parameter instead of a boolean value. To fix this, I need to modify the onClick handler to call open properly, like this:
<button onClick={() => open(false)}>registration</button>
This way, I am creating an inline arrow function that correctly calls open with a boolean argument when the button is clicked.
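The same pitfall is language-independent: passing the result of a call where a callable is expected. A tiny Python sketch of the idea (names are illustrative, not from the original code):

```python
handlers = []

def open_panel(is_packed: bool) -> str:
    """Stand-in for the React `open` function above."""
    return "hello" if is_packed else "wewe"

# Wrong: this would store the *result* of the call (a string), not a callable:
# handlers.append(open_panel(False))

# Right: wrap the call in a zero-argument function, just like the arrow-function fix:
handlers.append(lambda: open_panel(False))

print(handlers[0]())  # prints "wewe"
```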
I was using appium_python_client 2.0.0. I tried uninstalling and reinstalling it, but that did not fix the issue. Downgrading it to 1.3.0 fixed it.
Server:
IIS ASP.Net Core 8.0,
Microsoft.AspNetCore.SignalR 1.1.0,
Microsoft.AspNetCore.SignalR.Core 1.1.0,
Angular,
Javascript successfully connected SignalR to server.
Client 1:
IIS ASP.NET 4.8,
Microsoft.AspNetCore.SignalR.Client 8.0.0.0,
Microsoft.AspNetCore.SignalR.Client.Core 8.0.0.0,
successfully connected SignalR to server.
Client 2:
Windows C# Desktop UI App .NET Framework 4.8,
Microsoft.AspNetCore.SignalR.Client 8.0.0.0,
Microsoft.AspNetCore.SignalR.Client.Core 8.0.0.0,
Failed to connect SignalR to the server.
Connection = new HubConnectionBuilder()
.WithUrl("http://localhost/QMOSRFactor/rfactor",
options => {
options.Transports = HttpTransportType.WebSockets;
})
.WithAutomaticReconnect()
.Build();
After calling await Connection.StartAsync(); nothing happened. No error in the Visual Studio Output window. No exceptions.
If I wrapped the above code in a netstandard2.0 library and referenced that library in the Client 2 project: same thing, nothing happened, no error in the Visual Studio Output window, no exceptions.
Looks like the class
com.facebook.react.bridge.JSIModulePackage
was removed starting from react-native version 0.75.0-rc.0, some 7 months ago. As far as I understand, it was replaced with
com.facebook.react.ReactPackage
The corresponding changes to
com.nozbe.watermelondb.jsi.WatermelonDBJSIPackage
were made in v0.28.0-1, 2 months ago. So, in theory, WatermelonDB v0.28.0 should be compatible with the latest React Native, but it is not clear when that will happen.
I wish I knew that 2 days ago.
A couple of things you can try.
This appears to be Microsoft's troubleshooting article related to this issue https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/install/windows/restore-missing-windows-installer-cache-files
You should call WidgetsFlutterBinding.ensureInitialized(); before await Firebase.initializeApp().
Here is the full documentation link for Firebase with Flutter.
The package aad_oauth
has a dependency named flutter_secure_storage.
And flutter_secure_storage
has a native method named delete.
So I guess when you use oauth.login()
or oauth.getAccessToken(),
the native method delete
will be called, but somehow in native code this method doesn't exist; that's why you got the error
MissingPluginException(No implementation found for method delete on channel plugins.it_nomads.com/flutter_secure_storage)
So let's search for the keyword to check whether your project downloaded the dependency successfully.
Native method "delete" on Android:
If your project doesn't have them, you may try cleaning the Flutter or pub cache, then re-syncing the dependencies.
YEAR(DATEADD(DAY, 4 - ((DATEPART(WEEKDAY, @date) + @@DATEFIRST - 2) % 7 + 1), @date))
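For anyone wanting a cross-check: the T-SQL expression above finds the Thursday of @date's ISO week and takes its year, i.e. the ISO week-numbering year. A Python sketch of the same logic, compared against the stdlib:

```python
from datetime import date, timedelta

def iso_week_year(d: date) -> int:
    # Shift to the Thursday of d's ISO week (Mon=1 ... Sun=7), then take its year,
    # mirroring DATEADD(DAY, 4 - iso_weekday, @date) in the T-SQL above
    thursday = d + timedelta(days=4 - d.isoweekday())
    return thursday.year

for d in (date(2024, 12, 30), date(2025, 1, 1), date(2027, 1, 1)):
    print(d, iso_week_year(d), d.isocalendar()[0])  # the two values always match
```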
As of the writing of this answer (January 7, 2025), there is a pending Issue Tracker Post which is a feature request related to this post. I suggest hitting the +1
button to signify that you also have the same issue/request.
Also, according to the latest comment (#106):
From the product team:
Would be really useful to understand at least a few specific use cases businesses might have for building automation on top of Smart Chips. Any examples?
Please share here or at https://forms.gle/XQYP6NxTVp9pPkqm6.
I advise that you submit an entry to the form as well, as I believe the team handling the Issue Tracker post is gathering more information.
I am using the same loop as above. Could somebody give me any advice on why this loop gets "stuck" for nearly 15 minutes and then suddenly continues?
The answer is now the question itself. I edited the question to make sense and it worked, so that is the answer, lol. I don't know whether I should delete this question, since it answers itself.
In T-SQL, PIVOT is a little bit tricky. Please check whether this code works:
CREATE TABLE holiday (
Region VARCHAR(100),
Date VARCHAR(100)
);
INSERT INTO holiday (Region, Date)
VALUES
('Scotland' ,'2019-01-01'),
('Scotland' ,'2019-01-03'),
('Scotland' ,'2019-01-04'),
('England-and-Wales' ,'2019-01-01'),
('England-and-Wales' ,'2019-01-02'),
('England-and-Wales' ,'2019-01-05'),
('Northern-Ireland' ,'2019-01-05')
SELECT
Date,[Scotland] AS Scotland,[England-and-Wales] AS [England-and-Wales],[Northern-Ireland] AS [Northern-Ireland]
FROM
(SELECT Date, Region, 1 AS Value FROM holiday ) AS SourceTable
PIVOT (MAX(Value) FOR Region IN ([Scotland], [England-and-Wales], [Northern-Ireland])
) AS PivotTable
Result:
Date Scotland England-and-Wales Northern-Ireland
2019-01-01 1 1 null
2019-01-02 null 1 null
2019-01-03 1 null null
2019-01-04 1 null null
2019-01-05 null 1 1
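To see what the PIVOT is doing, the same reshaping can be sketched in plain Python (data copied from the INSERT above; an illustration of the reshaping, not of full T-SQL semantics):

```python
rows = [
    ("Scotland", "2019-01-01"), ("Scotland", "2019-01-03"), ("Scotland", "2019-01-04"),
    ("England-and-Wales", "2019-01-01"), ("England-and-Wales", "2019-01-02"),
    ("England-and-Wales", "2019-01-05"), ("Northern-Ireland", "2019-01-05"),
]
regions = ["Scotland", "England-and-Wales", "Northern-Ireland"]

pivot = {}  # date -> {region: 1}, mirroring MAX(Value) FOR Region IN (...)
for region, day in rows:
    pivot.setdefault(day, {})[region] = 1

for day in sorted(pivot):
    # missing regions come back as None, like the NULLs in the SQL result
    print(day, [pivot[day].get(r) for r in regions])
```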
I've experienced the same issue, and changing these emulator settings fixed it:
OpenGL ES renderer: SwiftShader; OpenGL ES API level: Compatibility (OpenGL ES 1.1/2.0)
The emulators are running with Android API VanillaIceCream.
Seems like I found the answer, thanks to the article.
Below is a quote from the article. I changed it a bit to use the time zones used in my example.
When the AT TIME ZONE operator is applied to a timestamp, it assumes the stored value is in the specified time zone (UTC in the above query) and converts it into the client time zone, i.e. Europe/Moscow.
This naturally answers Question 2.
The value for timestamp without time zone is stored as is: INSERT ... VALUES (DEFAULT) stored 2025-01-07 15:25:33.452 +0300 as 2025-01-07 15:25:33.452. Then SELECT no_tz AT TIME ZONE 'UTC' AS "no_tz as UTC" added 3 hours to 15, and it became 2025-01-07 18:25:33.452 +0300 in the Europe/Moscow time zone.
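The same semantics can be reproduced in Python with zoneinfo (a sketch of the conversion, not of PostgreSQL itself):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The stored naive value (timestamp without time zone)
naive = datetime(2025, 1, 7, 15, 25, 33)

# AT TIME ZONE 'UTC' on a naive timestamp: interpret the stored value as UTC...
as_utc = naive.replace(tzinfo=ZoneInfo("UTC"))

# ...then render it in the client's zone, Europe/Moscow (UTC+3)
in_moscow = as_utc.astimezone(ZoneInfo("Europe/Moscow"))
print(in_moscow)  # 2025-01-07 18:25:33+03:00
```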
yeah ... the more I know the more I understand how little I know
You need to acquire/release the semaphore permit on the processing thread, that is, inside the task, not outside it.
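A minimal sketch of this in Python (the original context may be another language; names here are illustrative): the permit is acquired inside the task body, on the worker thread, never by the submitting thread.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

sem = threading.Semaphore(2)   # at most 2 tasks may process concurrently
lock = threading.Lock()
active = 0
active_peak = 0

def task(i):
    global active, active_peak
    with sem:                  # acquired/released on the processing thread, inside the task
        with lock:
            active += 1
            active_peak = max(active_peak, active)
        # ... do the actual work here ...
        with lock:
            active -= 1

with ThreadPoolExecutor(max_workers=8) as ex:
    for i in range(20):
        ex.submit(task, i)     # the submitting thread never touches the semaphore

print("peak concurrent tasks:", active_peak)  # bounded by the 2 permits
```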
Vertex AI Freeform does not provide options for direct keyboard customization within its prompt interface. The primary emphasis of Freeform lies in prompt engineering and engaging with large language models via text, images, audio, or video prompts. There is no inherent capability to alter keyboard shortcuts or behaviors that pertain specifically to the prompt input area. Modifications to keyboard behavior would have to be managed at the operating system level or through browser extensions.
If you need this feature, raise a request in an issue tracker with a complete description of the issue, and the engineering team will look at it for future implementation.
The problem originated from the way the sockets are configured in the Python files: the different namespaces used for the project were not specified on each socket.on(...) or emit(...) method.
Python files after fix:
# Example from main.py file
socketio.on_event('getter', jobs.getter, namespace='/jobs')
# Example from jobs.py file
emit('error', json.dumps({"status_code": r.status_code}), namespace='/jobs')
I also specified the namespaces on the Angular files as such:
// class used to handle the sockets of the '/jobs' namespace
export class SocketJobs extends Socket {
constructor() {
super({ url: socketio_url + '/jobs', options: { autoConnect: false } });
}
}
// more code //
this.socket.emit('getter', {
// json //
});
7 years later, but instead of redefining the function as the person above suggested (and thanks for the suggestion, that's how I got my solution), it is easier to just overwrite the neotree variable neo-default-system-application in your config files:
(defvar neo-default-system-application "start")
On macOS I changed it to "open" and everything is working as I expected.
I was running into this today. A SELECT statement by itself took 6 seconds to return 10 rows; as part of an INSERT into an empty table, it took 40-plus seconds. I ended up putting a WITH (NOLOCK) hint on the SELECT statement, and that got me right back to 6 seconds.
You can force a context transition using CALCULATE as shown below.
Valid =
CALCULATE(
    VAR h = MIN( HOUR( [timestamp_high] ) )
    VAR l = MIN( HOUR( [timestamp_low] ) )
    RETURN IF( h > l, "Bad", "Good" )
)
Answer: simply open the port in the security groups on both sides, EC2 and RDS. Go to Security Groups > Edit inbound rules > select MySQL, port 3306, source Anywhere-IPv4. (For production, prefer restricting the source to your EC2 instance's security group instead of Anywhere-IPv4.)
The problem with the delete job artifacts API is that it does not work with job tokens. Any idea how to do it?
In case someone experiences the same issue and the above directives are not enough, you will need to use "proxy_ssl_name name" - https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_name:
proxy_ssl_server_name on; #mandatory
proxy_ssl_protocols TLSv1.3; #sometimes is needed
proxy_ssl_name myfqdn.com;
Why?
Sometimes proxy_set_header Host myfqdn.com;
is not enough to satisfy upstream group server's SNI.
Same here. Upgrading from React 16 to React 19, I am getting
React is not defined
ReferenceError: React is not defined
Everywhere React was not imported. This should not be the case with React 19 anymore. I guess the issue arises during the migration from React 16 to React 19 and is somehow internal.
Any help very much appreciated!
/* GlobalTime must point to a valid struct tm; the original passed an uninitialized pointer to mktime() */
time_t now = time(NULL);                 /* needs <time.h> */
struct tm* GlobalTime = localtime(&now);
mktime(GlobalTime);                      /* normalizes the fields, including tm_yday */
int DaysThisYear = GlobalTime->tm_yday;  /* 0-based day of the year */
Thanks @mkrieger1, I managed to fix my code and it's working perfectly now.
Also a big thanks to @ggorlen. The turtle movement and being able to shoot several bullets at once is amazing and definitely what I wanted to complete next. You gave me a big help that I can study and build on.
Thanks to both of you for your precious help.
GeoJSON was effective in producing the contours I needed.
Is it always the case for Teams channel?
Top! Thank you for the answer!
To anyone having this same issue: the answer is to set a background (it can be transparent) on the DataGrid. That makes the flyout clickable and showable everywhere. Also, instead of using RowStyle:
<Style TargetType="dataGridControl:DataGridRow">
<Setter Property="dataGridControl:DataGridRow.ContextFlyout">
Use:
<Style TargetType="dataGridControl:DataGrid">
<Setter Property="dataGridControl:DataGrid.ContextFlyout">
<Setter.Value>
Your design looks like CRTP, since Child_Type1
inherits from a base class parameterized by Child_Type1
itself, but without fully adopting the principles of this pattern, i.e. one should only pass the derived class to the base class and not other stuff (like T
and Child_Type1<T>::Inner
in your example).
If you fully follow the CRTP, you should be able to write:
template <class T>
class Child_Type1 : public Base<Child_Type1<T>> {};
and then try to recover in Base
the types you need, ie. T
and Child_Type1<T>::Inner
Note that you can recover T
from Child_Type1<T>
with a little helper (see also this related question):
template <typename T>
struct get_T;
template <template <typename> class X,typename T>
struct get_T<X<T>> { using type = T; };
Following the answer of @Jarod42 and using this trick, you could have something like:
////////////////////////////////////////////////////////////////////////////////
// see https://stackoverflow.com/questions/78048504/how-to-write-a-template-specialization-of-a-type-trait-that-works-on-a-class-tem
template <typename T>
struct get_T;
template <template <typename> class X,typename T>
struct get_T<X<T>> { using type = T; };
////////////////////////////////////////////////////////////////////////////////
template <typename T>
struct Inner;
////////////////////////////////////////////////////////////////////////////////
template <class DERIVED>
class Base
{
public:
using T = typename get_T<DERIVED>::type;
using inner = Inner<DERIVED>;
virtual void test(void) = 0;
};
////////////////////////////////////////////////////////////////////////////////
template <class T>
class Child_Type1 : public Base<Child_Type1<T>>
{
public:
typename Base<Child_Type1<T>>::inner * var;
void test(void) override { }
};
template <typename T> struct Inner<Child_Type1<T>> { T some_variable; };
////////////////////////////////////////////////////////////////////////////////
int main()
{
Child_Type1<int> c1 = Child_Type1<int>();
}
Check the working directory: go to Project Properties > Run > browse the working directory, then select the root directory of the project. It worked for me.
Hope that helps.
Friend! I noticed the same problem, only on the PC side; I am writing a program in Delphi. Everything seems to be correct. I tried adding extra time delays and buffer cleanup in different places in the PC and Arduino programs, but I still haven't figured out what the problem is. It's exactly as you wrote: after the Arduino is powered on, the first packet from the PC program does not reach it, but if you restart the program on the PC, the first packet starts to arrive. The only workaround I found was to send the first packet from the PC twice.
Has anybody found an answer to this question, or has the author found a solution? If yes, please post the solution here.
Thanks, it works! I changed the unmarshaler slightly to avoid marshaling and unmarshaling the same data.
func (c *Config) UnmarshalYAML(value *yaml.Node) error {
var rawConfig struct {
Rules []map[string]yaml.Node `yaml:"rules"`
}
if err := value.Decode(&rawConfig); err != nil {
return err
}
c.Rules = make([]Rule, 0, len(rawConfig.Rules))
for _, rule := range rawConfig.Rules {
for ruleType, ruleData := range rule {
var rule Rule
switch ruleType {
case "typeA":
var typeA TypeA
if err := ruleData.Decode(&typeA); err != nil {
return err
}
rule = typeA
case "typeB":
var typeB TypeB
if err := ruleData.Decode(&typeB); err != nil {
return err
}
rule = typeB
default:
return fmt.Errorf("unknown rule type: %s", ruleType)
}
c.Rules = append(c.Rules, rule)
}
}
return nil
}
func main() {
var yamlFile = []byte(`
rules:
- typeA:
tic: 1
plop: 4
- typeB:
tic: 1
tac: 3
plip: 4
- typeA:
tic: 5
plop: 7
`)
var config Config
err := yaml.Unmarshal(yamlFile, &config)
if err != nil {
log.Fatalf("Error when parsing YAML file: %v", err)
}
for _, rule := range config.Rules {
fmt.Println(rule.Type(), rule)
}
}