Circle the largest or smallest number in Java
Refer to this video for the detailed logic.
I have tag names with the pattern YYYYMM.{4-digit auto-increment}.
I want to keep track of tags for:
the last two months
in case the branch does not receive any commits within two months, keep the last ten tags
git tag | sort -n | grep -v "$(date +%Y%m)" | grep -v "$(date --date='-1 month' +%Y%m)" | head -n -10 | xargs -I % git push -d origin %
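If you want to sanity-check the pruning logic without touching a remote, you can feed the pipeline fake tag names (hypothetical here) and drop the date filters, which would not match them anyway. Note that head -n -10 (negative count) requires GNU coreutils:

```shell
# Generate 12 fake tags, keep the 10 newest, and print the 2 oldest
# (i.e. the ones the full command WOULD delete).
printf '202301.%04d\n' $(seq 1 12) | sort -n | head -n -10
```

This prints 202301.0001 and 202301.0002, confirming that only tags beyond the newest ten are selected for deletion.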
You need to reinstall the dependencies. Just follow these steps:
Step I:
npm install @react-navigation/native @react-navigation/native-stack
Step II:
npm install react-native-screens react-native-safe-area-context
Step III:
cd ios && pod install && cd ..
Thanks to the MultiClick article, which helped me fix my issue.
Spigot addressed this concern and said the following:
As of 1.18+, the main spigot.jar is now a bootstrap jar which contains all libraries. You cannot directly depend on this jar. You should depend on Spigot/Spigot-API/target/spigot-api--SNAPSHOT-shaded.jar, or the entire contents of the bundler directory from your server, or use a dependency manager such as Maven or Gradle to handle this automatically.
Now, I am fully aware that he is running version 1.17, so it might be something else, but this is important since most people won't think about reading that section.
You can find that quote in their BuildTools Frequently Asked Questions section: BuildTools
Honestly, I'd just make one entity and then add an extra attribute called "type".
Use:
npx gltfjsx public/model.glb -o src/components/ComponentName.jsx -r public
strtok3 is an ECMAScript module (ESM). In a CommonJS (CJS) project (which it looks like you have), you can use a dynamic import() to load an ESM module:
(async () => {
  const strtok3 = await import('strtok3');
})();
I will demonstrate how to do that. I use URLs instead of module names here because I cannot import from locally installed dependencies.
const strtok3 = 'https://cdn.jsdelivr.net/npm/[email protected]/+esm';
const token_types = 'https://cdn.jsdelivr.net/npm/[email protected]/+esm';
async function run() {
  const {fromBuffer} = await import(strtok3);
  const {UINT32_BE} = await import(token_types);

  const testData = new Uint8Array([0x01, 0x00, 0x00, 0x00]);
  const tokenizer = fromBuffer(testData);
  const number = await tokenizer.readToken(UINT32_BE);
  console.log(`Decoded number = ${number}`);
}

run().catch(err => {
  console.error(`An error occurred: ${err.message}`);
});
But you are using TypeScript, which adds a challenge: in a CJS project, the TypeScript compiler transpiles the dynamic import() to require(), which cannot load an ESM module. The load-esm package works around that:
import {loadEsm} from 'load-esm';
(async () => {
  const strtok3 = await loadEsm<typeof import('strtok3')>('strtok3');
})();
As per Stack Overflow policy, I need to disclose that I am the author of all the dependencies used: strtok3, token-types and load-esm.
But rather than getting this to work in your CJS project, it is better to migrate your project to ESM. In an ESM project, you can more easily load both ESM and CJS dependencies.
Avoid using pre-compiled headers: they obscure your project's dependencies and your understanding of those dependencies.
Thanks @Zeros-N-Ones!
This works to find the record to update. My next issue is the updateContact() call (at the bottom), which fails. Any additional help will be greatly appreciated.
if (contact) {
  Logger.log("Contact found");

  const updatedContact = { // Create a *new* contact object
    names: [{
      givenName: personData.firstName || (contact.names && contact.names.length > 0 ? contact.names[0].givenName : ""),
      familyName: personData.lastName || (contact.names && contact.names.length > 0 ? contact.names[0].familyName : "")
    }],
    phoneNumbers: [],   // Initialize phoneNumbers as an empty array
    emailAddresses: [], // Initialize emailAddresses as an empty array
    organizations: [],  // Initialize organizations as an empty array
    addresses: [],      // Initialize addresses as an empty array
    birthdays: contact.birthdays ? [...contact.birthdays] : []
  };
  Logger.log("updatedContact created");

  // Update other fields - phone numbers, email, organizations, addresses, and birthdays
  if (personData.homePhone) {
    updatedContact.phoneNumbers.push({ value: personData.homePhone, type: "Home" });
  }
  if (personData.mobilePhone) {
    updatedContact.phoneNumbers.push({ value: personData.mobilePhone, type: "Mobile" });
  }
  if (personData.email) {
    updatedContact.emailAddresses.push({ value: personData.email, type: "Personel" });
  }
  if (personData.company) {
    updatedContact.organizations.push({ name: personData.company });
  }
  if (personData.address) {
    updatedContact.addresses.push({ formattedValue: personData.address });
  }
  if (personData.birthdate) {
    try {
      const parsedDate = parseDate(personData.birthdate);
      if (parsedDate) {
        const birthday = People.newBirthday();
        const date = People.newDate();
        date.year = parsedDate.year || null;
        date.month = parsedDate.month || null;
        date.day = parsedDate.day || null;
        birthday.date = date;
        updatedContact.birthdays = [birthday];
      } else {
        Logger.log("Warning: Invalid birthdate format: " + personData.birthdate);
      }
    } catch (error) {
      Logger.log("Error setting birthdate: " + error);
      Logger.log("Error Details: " + JSON.stringify(error));
    }
  }

  Logger.log("Contact object BEFORE update: " + JSON.stringify(updatedContact, null, 2));
  var updatePersonFields = "updatePersonFields=names,emailAddresses,phoneNumbers,addresses,organizations,birthdays";
  const finalContact = People.People.updateContact(updatedContact, resourceName, {updatePersonFields: "names,emailAddresses,phoneNumbers,addresses,organizations,birthdays"});
What do you mean by "does not load"? Does the request fail, or do you get the old code?
I'm also wondering whether you're using "outputHashing": "all" in angular.json?
Forwarding the respective port worked for me, e.g. cap run android --forwardPorts 5173:5173
Problems like this still exist in 2024, and this was the closest post I found in my search for a way to resolve a path containing any number of unknown symbolic links. So I offer my hack leveraging (Get-Item ).Target, working through the syntactic pain, in case it is a helpful starting point for someone else. Note: I only tested with "mklink /d" symbolic folders in the path.
PowerShell command lines to demo resolving input $Path in place ($Path is "resolved" as $Path; $DIR and $DIRs are used as scratch space):
$Path,$DIRs=(Resolve-Path $Path).Path.Split("\");
while($null -ne $DIRs){ $DIR,$DIRs=$DIRs; $Path=$Path+"\"+$DIR; $DIR=(Get-Item $Path).Target; if ($DIR.GetType().Name -eq "String[]"){$Path=$DIR[0]}; };
The batch command to use this gets more complex, having to escape some text; it is demoed here with input/output as %CDPathResolved%, since %Path% is reserved in a batch context:
for /f "delims=" %%a in (
'powershell -command "$Path,$DIRs=(Resolve-Path '"%CDPathResolved:)=^^^)%'").Path.Split('"\'");
while($null -ne $DIRs){ $DIR,$DIRs=$DIRs; $Path=$Path+'"\'"+$DIR; $DIR=(Get-Item $Path).Target; if ($DIR.GetType().Name -eq '"String[]'"){$Path=$DIR[0]}; } $Path;"'
) do set "CDPathResolved=%%a"
Batch notes: the for loop just captures the output written by the "$Path;" at the end of the PowerShell (an implicit Write-Output). The injected single quotes escape the double quotes and pass them through to PowerShell. The batch string-replace syntax ":)=^^^)" on the input CDPathResolved is needed to escape any ")" in a pathname as "^)" for PowerShell, since "Program Files (x86)" in file paths broke things.
Use case: I had a build failing when I was forced to move my Jenkins build project with "mklink /d" to another drive. I worked around it by setting my current working path to the resolved path before kicking off "node.exe", though I later diagnosed that Angular's "ng build" (https://angular.dev/cli/build) has a limitation addressed by "preserve-symlinks" (or is it "node.exe" that is challenged? I'm not educated enough on these matters to distinguish, and I no longer care to learn more). So you could compare my case to see how my hack applies, but perhaps also look into the node/Angular switches, or similar options in your context, that might work around your case more cleanly before going down the rabbit hole like me.
The redirect_uri parameter may refer to the OAuth out-of-band (OOB) flow that has been deprecated and is no longer supported. This documentation explains how the redirect_uri determines how Google’s authorization server sends a response to your app. You can also refer to the migration guide for instructions on updating your integration.
Also, I found this post that has the same concern as yours, which might be helpful to you.
for git bash, add/modify like that in c:/Users/YOUR_NAME/.bash_profile:
export PATH="$HOME/AppData/Roaming/pypoetry/venv/Scripts:$PATH"
OK, I found a solution. Instead of an OTOCO order, I needed an OTO one:
response = requests.post(f"{base_url}/sapi/v1/margin/order/oto", headers=headers, params=params)
where there is no need to provide a stop loss. So the params would look like this:
params = {
    "symbol": "BTCUSDT",
    "isIsolated": "FALSE",
    "sideEffectType": "MARGIN_BUY",
    "workingType": "LIMIT",
    "workingSide": "BUY",
    "workingPrice": 80000,
    "workingQuantity": 0.0002,
    "workingTimeInForce": "GTC",
    "pendingType": "LIMIT",
    "pendingSide": "SELL",
    "pendingQuantity": 0.0002,
    "pendingPrice": 110000,
    "pendingTimeInForce": "GTC",
    "timestamp": int(time.time() * 1000),
}
This allows a limit-type follow-up order.
I had the same problem. Just save your file before running the code: press Ctrl + S, then run it again in the terminal as "node code.js" and it's going to work.
While the accepted answer works(*), I think there is a simpler solution:
install.packages(c("ada", "ipred", "evd"))
install.packages("http://cran.r-project.org/src/contrib/Archive/RecordLinkage/RecordLinkage_0.4-1.tar.gz")
That is, install.packages() can take a URL, so you don't need to manually download, install, and delete the tarball. However, you do need to manually install the dependencies.
*The original answer was written 4 years ago and now generates this error:
ERROR: dependencies ‘e1071’, ‘RSQLite’, ‘ff’, ‘ffbase’ are not available for package ‘RecordLinkage’
* removing ‘/Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/library/RecordLinkage’
Presumably this is because the package's dependencies have changed; I get the same error with my solution. Also note that the ffbase package is now archived too.
I have the same problem when running my CLI tools against a SAP HANA database using Sap.Data.Hana.Net.v8.0.dll with .NET 8.
When PublishSingleFile=true I get the error; adding <IncludeNativeLibrariesForSelfExtract>true</IncludeNativeLibrariesForSelfExtract> does not help.
When PublishSingleFile=false there is no problem, but I would like to keep a single .exe file, not hundreds of DLLs...
Fixed the issue.
try:
    location = pyautogui.locateOnScreen('1.png', confidence=0.9)
    if pyautogui.locateOnScreen('1.png', confidence=0.9):
        call_count = 0
        if pyautogui.locateOnScreen('1.png', confidence=0.9):
            while call_count <= 7:
                logging.info(call_count)
                call_count += 1
            if has_restarted:
                return
            restarting_function(True)
        restarting_function()
except pyautogui.ImageNotFoundException:
    pass
Browsers use navigator.geolocation, but it's often imprecise because they don't access GPS directly. Instead, they rely on:
Wi-Fi networks (accuracy depends on external databases), cell towers (less precise, can be off by hundreds of meters), and the IP address (very inaccurate, can be off by kilometers).
Because browsers don't have direct GPS access, no JavaScript library can guarantee pinpoint accuracy. I faced this challenge in one project, and I just drew squares in my database. This doesn't solve the problem, but it can be useful in some situations. You could also develop a phone app, as a phone app provides better accuracy if the user grants the permissions.
I had the same problem, but in reverse. My setup project insisted on compiling as "x86" and I couldn't make it change to "x64".
I just got the answer from Stack Overflow.
Why does the Visual Studio setup project always build as x86
Left-click on the setup project file, then look in the properties window. You can set the target platform for the installation there.
More of a workaround, but you can try installing it via conda as well.
conda install pandas
I am facing a similar problem. The issue lies where the github:repo:push action uses a helper that initializes a git repo, https://github.com/backstage/backstage/blob/master/plugins/scaffolder-backend-module-github/src/actions/github.ts#L275, and uses this helper function here: https://github.com/backstage/backstage/blob/master/plugins/scaffolder-node/src/actions/gitHelpers.ts#L49-L52
My suggestion is to follow the GitHub issue https://github.com/backstage/backstage/issues/28749 to allow an action like: github:branch:push.
I am trying to fetch CPU % using jtopenlite; however, I am getting CPU % as zero for all the jobs. Can someone please help here?
I ended up changing my user interface to use a tab controller and searching the separate algolia indexes in their own respective tabs. My repository and notifiers are essentially the same as the posted code other than adding an 'indexName' parameter and removing one of the HitsSearchers in the search function. I'm still not sure why the code in my question doesn't work as I thought it should, but for now this update has solved my issue.
Not the best answer, but at least so you can test that your converter works, you can force it:
var options = new JsonSerializerOptions();
options.Converters.Add(new FooConverter());
var foo = JsonSerializer.Deserialize<Foo>(json, options);
I had the same issue and couldn't figure out what was going on. I tried all the settings mentioned here and on other posts. But it made no difference. In the end I disabled and re-enabled Pylance and that fixed it. Just in case that's of use to anyone else struggling with this.
This is a big issue today, especially for projects with a large codebase. Flutter lint, for example, runs automatically for the whole project, even for files that are closed.
Was the bug really solved? I added the /bin folder to the MANIFEST, but it seems like only the "cannot load class" problem is solved. The NullPointerException bug still exists.
Make sure to include "storage" in the manifest file specifically in the permission section:
"permissions": ["storage"]
Hope that would help.
More detail in here.
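If it helps, here is a minimal (hypothetical) manifest.json sketch showing where the permissions section sits in a Manifest V3 extension; the name and version are placeholders:

```json
{
  "manifest_version": 3,
  "name": "My Extension",
  "version": "1.0",
  "permissions": ["storage"]
}
```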
I think this will help you. Yes, I know it's an old post, but it may help others:
https://github.com/Makunia/Googleform-to-Discord-Webhook-Post-Thread/blob/main/Googleform.js
It seems there is a parsing/formatting difference between .NET Framework and .NET (or .NET Core); .NET appears to use the IEEE 754-2008 standard.
I've tried your code in .NET Framework and .NET (from 3.1 onwards), and it behaves as you mentioned.
The reason is already answered in this Stack Overflow question: Rounding issues .Net Core 3.1 vs. .Net Core 2.0/.Net Framework
Hope this helps!
Thank you for sharing this! I've been working on this all afternoon. Creating a new FieldIdentifier was the missing part for me to get validation to re-fire!!
I just created a simple tool using Node.js to run locally to test Open Graph data.
It has a live preview.
Please find my code below; I'm getting a null exception in my controller:
var apiHost = $"{Request.Scheme}://{(string)Request.Headers["X-Original-Host"] ?? Request.Host.Value}";
[Fact]
public async Task Get_ShouldReturnOk_WhenXOriginalHostIsMissing()
{
    // Arrange
    var mockHeaderNavigationModel = new Mock<IHeaderNavigationModel>();
    var mockNavigationService = new Mock<INavigationService>();

    // Mock HttpRequest
    var request = new Mock<HttpRequest>();
    request.Setup(x => x.Scheme).Returns("https");
    request.Setup(x => x.Host).Returns(HostString.FromUriComponent("localhost:5001"));
    request.Setup(x => x.PathBase).Returns(PathString.FromUriComponent("/api"));

    var httpContext = Mock.Of<HttpContext>(_ =>
        _.Request == request.Object
    );

    // Controller needs a controller context
    var controllerContext = new ControllerContext()
    {
        HttpContext = httpContext,
    };

    // Assign context to controller
    var controller = new NavigationController(mockHeaderNavigationModel.Object, mockNavigationService.Object)
    {
        ControllerContext = controllerContext,
    };

    // Mock navigation service (for example, returning some mock content)
    mockNavigationService.Setup(ns => ns.GetNavigationContentAsync(It.IsAny<string>(), It.IsAny<string>()))
        .ReturnsAsync(UnitTestsTestData.MockNavigationContent);

    // Act
    var result = await controller.Get();

    // Assert
    var okResult = Assert.IsType<OkObjectResult>(result); // Should return OkObjectResult
    Assert.NotNull(okResult.Value); // Check that the result has content
}
On Azure AD B2C, we need to filter by mail instead. (Thanks zoke for most of this)
var graphUser = (await graphClient.Users.GetAsync(
config => config.QueryParameters.Filter = $"mail eq '{user.Email}'"))?
.Value?
.FirstOrDefault();
This works with WooCommerce 9.5.2.
$products_already_in_cart = WC()->cart->get_cart_item_quantities()[$product_id];
Did you ever find out how to insert the event data?
answer by u/tetrahedral on Reddit:
My hunch is that you are passing the text back as a MutableList somewhere and calling .clear() on it without realizing that it’s a reference to the storage. Modify getText and addText to copy into a new array and see if the problem goes away.
I was creating an alias to the MutableList when I was passing it as a parameter. When I called .clear() on it, the list was cleared in both files.
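The same pitfall can be reproduced in a few lines of plain Java (the names here are made up for illustration): passing a list hands over a reference, so clearing it through one name clears it everywhere, while a defensive copy is unaffected:

```java
import java.util.ArrayList;
import java.util.List;

public class AliasDemo {
    public static void main(String[] args) {
        List<String> storage = new ArrayList<>(List.of("a", "b"));

        List<String> alias = storage;                  // same object, not a copy
        List<String> copy  = new ArrayList<>(storage); // defensive copy

        alias.clear();                                 // also clears 'storage'!
        System.out.println(storage.size());            // 0
        System.out.println(copy.size());               // 2
    }
}
```

This is why the Reddit suggestion of copying into a new list inside getText/addText breaks the unwanted sharing.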
I guess avoiding ! simplifies the logic for reading:
public boolean isBetween(LocalTime start, LocalTime end, LocalTime target) {
return start.isAfter(end) ?
(target.isAfter(start) || target.isBefore(end)) :
(target.isAfter(start) && target.isBefore(end));
}
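For example (a small sketch reusing the method from this answer): with a window that crosses midnight, 23:30 is inside and 12:00 is outside. Note that both boundaries are exclusive, so a target equal to start yields false:

```java
import java.time.LocalTime;

public class TimeWindowDemo {
    // Same logic as the answer: exclusive at both boundaries.
    static boolean isBetween(LocalTime start, LocalTime end, LocalTime target) {
        return start.isAfter(end)
                ? (target.isAfter(start) || target.isBefore(end))   // window crosses midnight
                : (target.isAfter(start) && target.isBefore(end));  // same-day window
    }

    public static void main(String[] args) {
        LocalTime night = LocalTime.of(22, 0), morning = LocalTime.of(6, 0);
        System.out.println(isBetween(night, morning, LocalTime.of(23, 30))); // true
        System.out.println(isBetween(night, morning, LocalTime.of(12, 0)));  // false
        System.out.println(isBetween(night, morning, night));                // false (exclusive)
    }
}
```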
If you don't need or don't actually plan to utilize R Server, then don't select the option to install it. You can deselect R Server during the install process; when it is deselected, the SQL installation will proceed.
If you're trying to use a C# DLL in a Node.js application, the challenge is that C# runs in the .NET runtime, while Node.js runs in a JavaScript environment. You can't directly load a C# DLL in Node.js without some interop mechanism.
How to Make It Work
Here are some ways you can get this working:
Option 1: Edge.js
Edge.js is a library that allows you to call C# code from Node.js.
Steps:
1. Install Edge.js:
npm install edge-js
2. Use it in your Node.js app:
const edge = require('edge-js');

const myDllMethod = edge.func({
  assemblyFile: 'path/to/your.dll',
  typeName: 'YourNamespace.YourClass',
  methodName: 'YourMethod' // Must be a public static method
});

myDllMethod(null, function (error, result) {
  if (error) throw error;
  console.log(result);
});
Pros: Simple to set up; good for small projects.
Cons: Only works with synchronous static methods; doesn't support advanced .NET features.
Option 2: .NET Web API
If your DLL has dependencies or needs a runtime, it's better to expose it as an API.
Steps:
1. Create a Web API in .NET Core:
[ApiController]
[Route("api/[controller]")]
public class MyController : ControllerBase
{
[HttpGet("call")]
public IActionResult CallCSharpMethod()
{
var result = MyLibrary.MyClass.MyMethod();
return Ok(result);
}
}
2. Call the API from Node.js using Axios:
const axios = require('axios');
axios.get('http://localhost:5000/api/MyController/call')
.then(response => console.log(response.data))
.catch(error => console.error(error));
Pros: Works for complex logic; no need to load DLLs in Node.js.
Cons: Requires hosting the API.
Option 3: Console app via child process
If Edge.js doesn't work and an API is overkill, you can run a C# console app and capture its output.
Steps:
Create a Console App in C#:
class Program
{
    static void Main()
    {
        Console.WriteLine(MyLibrary.MyClass.MyMethod());
    }
}
Call the EXE from Node.js:
const { exec } = require('child_process');
exec('dotnet myConsoleApp.dll', (error, stdout, stderr) => {
if (error) console.error(error);
console.log(stdout);
});
Pros: No need to modify the DLL; works with any C# logic.
Cons: Slower due to process execution.
Which One Should You Choose?
For simple method calls → Use Edge.js
For a scalable solution → Use a .NET Web API
If Edge.js doesn’t work → Use the Console App approach
It depends on what you need.
Check out what forms are for with this excellent guide: https://angular.dev/guide/forms/reactive-forms
And for everything else, you can also use inputs without forms!
Look in https://github.com/SWNRG/ASSET for such implementations. There are three different versions of Contiki; you compile them separately, and you utilize the nodes from each one.
I found the answer on AWS doc:
The pending-reboot status doesn't result in an automatic reboot during the next maintenance window. To apply the latest parameter changes to your DB instance, reboot the DB instance manually.
Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RebootInstance.html
This type of error usually occurs when there is a problem with the Flutter and Kotlin dependencies in your Android project. In this case, the FlutterFragmentActivity class cannot be resolved correctly.
-Review your dependencies: make sure that the required dependencies are included in the dependencies section of your app's build.gradle file:
implementation 'io.flutter:flutter_embedding_debug:<flutter_version>'
-Update Flutter and Gradle.
-Check your project settings on Android: make sure your project is properly configured to work with Flutter. Open the MainActivity.kt file and verify that the import and class are correct:
package com.app.myproject
import io.flutter.embedding.android.FlutterFragmentActivity
class MainActivity: FlutterFragmentActivity() { }
You can simplify AtLeastTwoElements from @jcalz's answer to:
type AtLeastTwoElements<K extends PropertyKey> =
{ [P in K]: Exclude<K, P> }[K]
AtLeastTwoElements<"a" | "b"> evaluates to {a: "b", b: "a"}["a" | "b"] which is "a" | "b".
But AtLeastTwoElements<"a"> is {a: never}["a"] which is never.
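A quick sketch of how this behaves (type-level only; the const is just there so the file runs as a program):

```typescript
// Simplified helper from the answer: maps each key to the union of the *other* keys.
type AtLeastTwoElements<K extends PropertyKey> = { [P in K]: Exclude<K, P> }[K];

type Two = AtLeastTwoElements<"a" | "b">; // evaluates to "a" | "b"
type One = AtLeastTwoElements<"a">;       // evaluates to never

const stillAUnion: Two = "b";   // compiles: the union survived
// const impossible: One = "a"; // would not compile: One is never
console.log(stillAUnion);
```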
I needed to create a copy of my current Python 3.11 environment, but with Python 3.10.
Trying
conda install python=3.10
resulted in conda telling me that it couldn't resolve the environment, because python_abi and the things dependent on it were the problem.
So...
steps needed:
conda create --name py-3.10 --clone py-3.11
conda activate py-3.10
conda remove python_abi <--- this was the blocker stopping me
conda install python=3.10
conda install python_abi keyboard levenshtein
When I removed python_abi, it warned me that the packages keyboard and levenshtein would be removed, so the last step that adds python_abi back had to add these back too.
It's better than having to reinstall ALL of your packages.
If none of the answers here worked, try changing .gitignore encoding to UTF-8.
JSON Patch works exactly as you request if you change the JSON from an array to a Map<ID, Object>:
JSON Patch uses the index for arrays and the key for maps.
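As a sketch with made-up paths and IDs: with an array the patch path is positional, so it breaks if the array is reordered, while with a map the path uses the stable key:

```json
[
  { "op": "replace", "path": "/contracts/2/status",     "value": "closed" },
  { "op": "replace", "path": "/contracts/id-42/status", "value": "closed" }
]
```

The first operation targets index 2 of an array; the second targets the entry under key "id-42" of a map.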
If you get this on Windows when trying to open applications in a web browser, this worked for me: go to System Settings > Date & Time, turn the "Set time automatically" setting off and then on again, then turn the "Adjust for daylight saving time automatically" setting off and then on again. Refresh your browser and it should then log you in properly.
Use PowerShell instead of cmd.
You can also try the simpler syntax
repo.heads['feature-generate-events']
Both approaches you described are working solutions with their own pros and cons. The right choice depends on your needs.
Pros:
Cons:
When to use:
Pros:
Cons:
When to use:
However, I cannot fail to mention that the approaches above only make sense for stateful apps where local data persistence matters.
From your description, it’s unclear whether your «simple app running on Node.js» is stateful or not. If it’s stateless, consider using Cloud Run or App Engine, depending on your specific requirements (scalable by design, minimal maintenance, and most likely much cheaper than MIG).
To achieve the desired JOLT transformation, you need to check whether body.Contract.stayRestrictions[].restrictionType equals "ClosedToArrival" and, if so, set body.Block.stayRestrictions[].isClosedToArrival to true; otherwise it should be false. This ensures that the transformation accurately reflects the condition specified in the input JSON.
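A possible shape for a shift spec, as a rough sketch only: the exact paths and & levels depend on your full input, and the #true/#false literals emit the strings "true"/"false", so an extra conversion step may be needed if you require real booleans:

```json
[
  {
    "operation": "shift",
    "spec": {
      "body": {
        "Contract": {
          "stayRestrictions": {
            "*": {
              "restrictionType": {
                "ClosedToArrival": {
                  "#true": "body.Block.stayRestrictions[&3].isClosedToArrival"
                },
                "*": {
                  "#false": "body.Block.stayRestrictions[&3].isClosedToArrival"
                }
              }
            }
          }
        }
      }
    }
  }
]
```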
How could I know which flags are deprecated in JVM 17, for example, with this command line: java -XX:+PrintFlagsFinal -version?
Here's my setting:
capabilities: [{
  // capabilities for local Appium web tests on iOS
  platformName: 'iOS',
  'appium:deviceName': 'iPad mini (6th generation)',
  'appium:platformVersion': '17.5',
  'appium:automationName': 'XCUITest',
  'appium:udid': '713D25D6-E4EF-4E9D-B4BE-0B43BBFBB4F6',
  'appium:noReset': true,
  'appium:fullReset': false,
  'appium:app': join(process.cwd(), 'app/iOS-NativeDemoApp-0.1.0.app'),
  'appium:bundleId': 'com.apple.xcode.dsym.org.reactjs.native.example.wdioDemoAppTests',
  'appium:appWaitDuration': 90000,
  'appium:appWaitActivity': 'SplashActivity, SplashActivity,OtherActivity, *, *.SplashActivity',
  'appium:newCommandTimeout': 600
}],
and still face this issue
ERROR webdriver: Error: WebDriverError: The operation was aborted due to timeout when running "http://127.0.0.1:4723/session" with method "POST" and args "{"capabilities":{
Can anyone help?
The answer from @kEks above is the solution to your question.
However, I'd further suggest that you are missing the point of Snakemake rules by having those four rules that all contain shell loops. Is there any reason why you are making a rule that triggers once and runs every command in a loop, rather than having a rule that applies to any individual output file and letting Snakemake do the work? Your code would be considerably simpler if you wrote it this way. You could also then use the touch() output type of Snakemake to save you explicitly running the touch ... command in the shell part.
I ran it and it bricked my browser.
I found this repository, which is part of the jetpack compose course from Google, which implements the same approach:
Using wget (note: the loop must end with done, the capital -O names the output file while lowercase -o writes a log file, and the part numbers need zero-padding to five digits):
for i in {1..15}; do n=$(printf "%05d" "$i"); wget --show-progress -c --limit-rate=3M "https://huggingface.co/unsloth/DeepSeek-R1-GGUF/resolve/main/DeepSeek-R1-Q8_0/DeepSeek-R1.Q8_0-${n}-of-00015.gguf?download=true" -O "DeepSeek-R1.Q8_0-${n}-of-00015.gguf"; done
Did you find any solution yet? I want to implement a reward-based ad on my site too.
The problem here was the name of the parameter passed to the confirmPayment function. It should have looked like the below:
const {error} = await stripe.confirmPayment({
  elements: expressElements,
  clientSecret,
  confirmParams: {
    return_url: window.location.href,
  },
  redirect: 'if_required',
});
Have you found any solution? I have the same problem.
I found the problem. I had not added an expiry time for the token; because of that, when I sent it to the API, it was not understood.
app.config['JWT_ACCESS_TOKEN_EXPIRES'] = timedelta(days=7)
It was also not working because I had not imported the required function get_jwt:
from flask_jwt_extended import JWTManager, create_access_token, jwt_required, get_jwt_identity, get_jwt
I got this error when I copied javafx.graphics.jar to a new project but didn't also copy all of the dlls in the bin directory of the JavaFX SDK.
OK, I fixed this myself with a really, really ugly hack.
My DuplicateFilterSink class now looks like this:
public class DuplicateFilterSink : ILogEventSink
{
    private readonly ILogEventSink _innerSink;
    private LogEvent? _prevLogEvent;
    private int _duplicateCount = 0;

    public DuplicateFilterSink(ILogEventSink innerSink)
    {
        _innerSink = innerSink;
    }

    public void Emit(LogEvent logEvent)
    {
        // Check if the message is the same as the previous one
        if (_prevLogEvent != null && logEvent.MessageTemplate.Text == _prevLogEvent.MessageTemplate.Text)
        {
            _duplicateCount++;
        }
        else
        {
            // If the message has changed, log the previous duplicate count
            if (_duplicateCount > 0 && !logEvent.MessageTemplate.Text.StartsWith(" * The previous message occurred"))
            {
                string dupmsg = $" * The previous message occurred {_duplicateCount} times";
                Log.Information(dupmsg, _duplicateCount);
                _duplicateCount = 0;
            }

            // Log the new message
            _innerSink.Emit(logEvent);
            _prevLogEvent = logEvent;
        }
    }
}
Since I have not figured out a way to create a valid MessageTemplate with valid MessageTemplateTokens, I tried using the Log.Information() line, but that created infinite recursion, because the Log method kept calling the sink, which called the Log method, which called the... well, you get it.
I combated this by adding the "&& !logEvent.MessageTemplate.Text.StartsWith(..." condition to the if statement, so that the second time through it would not call the Log method again.
This works, but it is horribly kludgy. I will be grateful if anyone can solve my original problem in a best-practices way.
final_array = final_array.map { college in
    College(id: college.id, colData: college.colData.map { student in
        Student(name: student.name.uppercased(), age: student.age)
    })
}
I've got a 'username' field in YAML, and the value had a backslash character '\'; it must be escaped with another backslash: '\\'.
Had the same problem. What solved it was Rsge's comment:
use that (aka py.exe) to open Python
So I changed the *.py file association to C:\Windows\py.exe (using pythonw.exe or python.exe had no effect).
I have a similar issue: when executing a simple SELECT query, text values get quoted with ", but when I use the array_agg function it adds another ", so values look like ""hello"". I think this is something from the new PostgreSQL version.
Creating an API that returns true/false based on a hardcoded query doesn't seem such a good solution, because it isn't flexible at all and doesn't fit into REST, because it doesn't represent an action on a resource.
Sure it does - the resource is some named collection of information (aka "a document", although you can implement that any way you like behind the api facade), and the action on the resource is GET (please send me a copy of the current representation of the document).
As REST is designed to be efficient for large-grained hypermedia, you're more likely to want a design that returns a bunch of information all bundled together, rather than a document that describes a single bit, but that's something you get to decide for yourself.
How do I create an api that checks if a certain condition is met, while making it as flexible as possible and adhering to REST principles?
In that case, you're probably going to end up defining a family of resources, each of which has their own current representation (but under the covers might be implemented using a single "request handler" that knows how to choose the correct implementation from the target resource of the request).
So REST is going to tell you that you can do that (it's your own resource space) and that you'll probably want to use some readily standardizable form for describing the space of identifiers (ie: choosing identifiers that can be described using URI Templates).
As a general rule, you'll want to avoid using identifiers that couple you too tightly to a particular implementation of the handler, unless you are confident that the particular implementation is "permanent".
In terms of the spelling conventions you use to encode the variations of resources into your identifiers -- REST doesn't care what spelling conventions you use.
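A hypothetical sketch of that "family of resources" idea in JavaScript (the check names and the context shape are invented for illustration): one request handler serves many check resources, and each returns a small document rather than a bare boolean.

```javascript
// One request handler behind a URI Template like /checks/{name}.
// Each check resource has its own current representation (a document).
const checks = {
  'payment-overdue': (ctx) => ctx.daysLate > 30,
  'account-active': (ctx) => !ctx.suspended,
};

function getCheckRepresentation(name, ctx) {
  const check = checks[name];
  if (!check) return { status: 404 };
  // A document, not a bare boolean: room to bundle related info later.
  return { status: 200, body: { check: name, satisfied: check(ctx) } };
}
```

A GET on /checks/payment-overdue would then return the document for that one condition, and adding a new condition only means adding an entry to the map, not a new handler.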
You can use this:
export const isServer: boolean = typeof window === "undefined";
if (!isServer) {
  return JSON.parse(localStorage.getItem("user") || "{}");
}
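A minimal sketch of the same check wrapped in a helper, with a try/catch so a corrupt "user" entry falls back to an empty object instead of throwing (the key name is taken from the snippet above):

```javascript
// Returns the stored user, or {} on the server / on bad data.
function getStoredUser() {
  if (typeof window === 'undefined') return {}; // server-side render
  try {
    return JSON.parse(window.localStorage.getItem('user') || '{}');
  } catch {
    return {}; // corrupt JSON in storage
  }
}
```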
Did you solve this? I think I have the same problem.
Without knowing more about your architecture, this comes down to whether or not you want to expose the logging database to your frontend.
By using RUM (Real User Monitoring), you can choose to store only frontend errors in the frontend log storage, and reserve the backend log storage for server-side issues.
One option is to do it in one line:
Sheets("Sheet1").Range("A1:Z9").copy Sheets("Sheet2").Range("A1")
In general, working with the Active Sheet (using Range("a1") without specifying the sheet) or using the active Selection can lead to problems since the starting state is not known or controlled.
You need to remove the slash from the end of the path on the right side of the mapping, i.e. -v C:\folder:D:
Here is an example of how to expose a Flask app as a single Cloud Function:
https://firebase.google.com/docs/functions/http-events?gen=2nd
import { ECharts } from 'echarts/core';
echartsInstance: ECharts;
Documentation: https://echarts.apache.org/en/api.html#echarts
I recommend discarding nextjs-progressbar since it is no longer maintained and seamlessly switching to next-nprogress-bar. The usage remains exactly the same - you only need to update your imports.
For newer versions of Jenkins, the required Maven configuration can be found under Dashboard > Manage Jenkins > Tools. In the Maven section, select the desired version and ensure that the name you assign to the Maven version matches exactly with the name in the tool section. The name must be an exact match.
Can you share your top-level CMakeLists.txt where you call add_subdirectory() for the SDL libraries? I don't know for sure, but I suspect that targets from those SDL_* subdirectories are clashing with each other.
This has a more complete sample to build just about what you're describing: https://github.com/Ravbug/sdl3-sample/blob/main/CMakeLists.txt#L65
The SDL2_image build also outputs this somewhere in the process:
As for the problem, it's happening here: https://github.com/libsdl-org/SDL_image/blob/release-2.8.x/cmake/FindSDL2main.cmake#L6
I think you can fix that by setting the SDL2_DIR variable to point to your build directory.
Also, I think it's finding libSDL2main.a in /usr/local/lib because that's where CMake installs built libraries by default, so I suspect this SDL subdirectory is getting built and then installed there.
This line prevents that from happening
set(CMAKE_INSTALL_PREFIX "${CMAKE_BINARY_DIR}" CACHE INTERNAL "")
Also from that sample https://github.com/Ravbug/sdl3-sample/blob/main/CMakeLists.txt#L8-L9
Turns out that there were a few possible solutions to my problem. The easiest was to let puppeteer start the server by adding the server command to the jest-puppeteer.config.js file. I also added a command flag (--puppeteer) as described in the stackoverflow question:
server: {
  command: "node server.js --puppeteer",
  port: 5000,
  launchTimeout: 10000,
  debug: true,
},
I then added a check in the server.js file for the flag so that the server is not started for any unit tests other than those using puppeteer:
if (process.env['NODE_ENV'] === 'test'
    && !process.argv.slice(2).includes('--puppeteer')) {
  module.exports = {
    'app': app,
  };
} else {
  http.listen(port);
}
I also put my react (puppeteer) tests in a separate file just for good measure.
jest.config.js:
module.exports = {
  preset: "jest-puppeteer",
  testMatch: [
    "**/__tests__/test.js",
    "**/__tests__/testReactClient.js"
  ],
  verbose: true
}
Feel free to comment if you have a better solution.
You’ll want to use TimelineView for this; .everyMinute updates on the minute, so you can update your clock:
TimelineView(.everyMinute) { timelineContext in
  let minute = Calendar.current.component(.minute, from: timelineContext.date)
  …
}
There is now an option to click "Stage block", which can be more convenient than selecting lines and using the "Stage Selected Ranges" command:
Yes, this behavior is intentional and is due to Canvas' privacy settings and permissions model.
Privacy Settings in Canvas:
Canvas hides user email addresses by default unless the requesting user has the correct permissions. Admin users have broader access, so they can see emails, but regular users do not necessarily have permission to view their own or others' email addresses via the API.
Account-Level Settings:
Your Canvas instance may have settings that restrict email visibility for non-admin users. For example, Canvas administrators can configure whether email addresses are visible through the API under:
Admin > Settings > Security (or similar)
Scope of OAuth Tokens:
Even though you have disabled enforce scopes, Canvas still applies certain internal privacy rules. The email field might require additional permissions, such as Users - manage login details.
User Visibility & Role Permissions:
The visibility of user emails may also depend on the specific role settings under:
Admin > Roles & Permissions
Look for permissions related to "Users" or "Profile" and check if there are any restrictions on email visibility.
Check if an Additional Scope is Needed:
Some API fields are restricted unless the OAuth token includes additional permissions. Try explicitly requesting email access in the OAuth scope.
Try Querying profile Endpoint:
Instead of /api/v1/users/self, try querying:
GET /api/v1/users/self/profile
This endpoint sometimes includes the email address, depending on permissions.
Check public Attribute in User Settings:
Each user can set their email visibility under their Canvas profile settings. If emails are set to private, they may not appear in API responses.
Use Admin API Token (If Allowed):
If your app needs email addresses, you might need an admin API token or configure a service account with broader access.
Even with global developer key access, Canvas enforces privacy policies for regular users. You may need to adjust role permissions or request emails via a different endpoint (/profile) or with additional API scopes.
Would you like help testing specific API calls or adjusting Canvas settings?
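As a sketch, a request against the profile endpoint might look like this in JavaScript (the domain, token, and the primary_email field name are placeholders/assumptions here, not verified against your instance):

```javascript
// Builds the request for GET /api/v1/users/self/profile.
function buildProfileRequest(domain, token) {
  return {
    url: `https://${domain}/api/v1/users/self/profile`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// Usage with fetch (Node 18+ or a browser):
// const { url, headers } = buildProfileRequest('canvas.example.edu', token);
// const profile = await (await fetch(url, { headers })).json();
// console.log(profile.primary_email); // may be absent without permission
```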
BeeGFS is an example where df -l returns 0.
I set the constraint to Not Enforced in project2 and I was able to execute the command successfully.
This answer works for us, but in our case we had to change the constraint to 'Not Enforced' in project1.
You can change the name of a variable with assignin:
assignin('base', 'new_name_variable', old_name_variable)
I know this is super old, but if anyone comes across this: once the files you are reading get too big, you will need to set up something else. I have four files, each over 30,000 lines, that I am trying to read into my test, and loading them into the 1,000 VUs that run my test crashes it during the init stage.
I've looked at all your suggestions ... and it's simpler than that. Basically, SQLite3 has a text field that needs to be defined with a length; without this the field is unbounded, so DBGrid has to adapt and opts for a memo field for the display.
Define the field length and the problem is solved.
SCDF by default uses Java 8 when running apps. You can adjust this by setting BP_JVM_VERSION, e.g. to -jdk17:
export BP_JVM_VERSION=-jdk17
Can you tell me what changes you made to get it working? I am stuck in exactly the same scenario.
The solution, at least in this case, turned out to be having to place a new control on the Design screen rather than typing it in. I still have no idea what the difference is, or how the control on the UI and the control in the codebehind are connected, since there is no designer file; if I simply copy the modified .aspx and .aspx.cs files, and nothing else, to a different copy of the site, it works.
If you want to extract a date from a GUID, you must use UUID v1 to do so. It's not possible with v4.
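For illustration, here is a sketch of pulling the date back out of a version-1 UUID in JavaScript. V1 stores a 60-bit count of 100-nanosecond intervals since 1582-10-15, split across the time_low, time_mid, and time_hi fields; the function below only reassembles those bits (no validation beyond the version nibble):

```javascript
// Milliseconds between the Gregorian epoch (1582-10-15) and the Unix epoch.
const GREGORIAN_OFFSET_MS = 12219292800000n;

function extractUuidV1Date(uuid) {
  const hex = uuid.replace(/-/g, '');
  if (hex[12] !== '1') throw new Error('not a version-1 UUID');
  // Reassemble the 60-bit timestamp: time_hi (12 bits) | time_mid | time_low.
  const timeHex = hex.slice(13, 16) + hex.slice(8, 12) + hex.slice(0, 8);
  const intervals = BigInt('0x' + timeHex); // 100 ns units
  return new Date(Number(intervals / 10000n - GREGORIAN_OFFSET_MS));
}
```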
Thanks for your help on this. I'm having the same issue as the poster, and although I'm trying to understand how to change my formula to add Blank()/not null, I'm struggling; I'm not an advanced user. Here is my formula: Sort(ForAll(Distinct(Table1,SVP), {Result: ThisRecord.Value}),Result). This is for the first drop-down box that shows only UNIQUE names. The formula keeps giving me this warning, which is driving me insane: "Error when trying to retrieve data from the network: Expression "SVP eq null" is not supported. clientRequestId:"
In the YAML file, you can add a condition to the task that updates the build number, checking whether a variable is set to false. If the variable is false, the task will be skipped.
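A sketch of what that could look like (the variable name updateBuildNumber and the inline PowerShell task are placeholders for your own pipeline):

```yaml
steps:
  - task: PowerShell@2
    displayName: Update build number
    # Skipped whenever updateBuildNumber is set to false.
    condition: ne(variables['updateBuildNumber'], 'false')
    inputs:
      targetType: inline
      script: Write-Host "##vso[build.updatebuildnumber]1.0.$(Build.BuildId)"
```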
gridOptions={{
  rowHeight: 10
}}
In my case, removing rowHeight solved the problem.