I have the same issue. Datasets are present but they are empty. Did you solve this?
I tried many solutions, but none worked except this one:
Flutter 3.3.8 + Xcode 16
https://github.com/flutter/flutter/issues/155497#issuecomment-2437099277
Web proxy software may convert the case of the cookie name.
This behavior is permitted, at least under the older RFCs.
Consequently, many web libraries/frameworks handle cookie names as case-insensitive.
We should do the same, unless you expect all requests to be encrypted so that proxies can't modify them.
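For illustration, a minimal JavaScript sketch (the function and sample values are made up) of looking a cookie up by name case-insensitively:

// Parse a Cookie header and look the name up case-insensitively.
function getCookie(cookieHeader, name) {
  const target = name.toLowerCase();
  for (const pair of cookieHeader.split(';')) {
    const eq = pair.indexOf('=');
    if (eq === -1) continue;
    const key = pair.slice(0, eq).trim();
    if (key.toLowerCase() === target) {
      return pair.slice(eq + 1).trim();
    }
  }
  return undefined;
}

// Even if a proxy upper-cased the name, the lookup still succeeds:
console.log(getCookie('SESSIONID=abc123; theme=dark', 'sessionid')); // "abc123"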
Errors related to upload are logged in the console, where you can see the exact error. A common error is invalid UTF-8 characters.
In my case, it worked after resetting the Import and Export Settings to the defaults:
Go to the Tools menu.
Then go to Import and Export Settings.
Select the option "Reset all settings".
Click Next --> Next.
Finish.
Done.
I did something similar on my website as shown below:

Here is the code to achieve it:
CSS-
.imgcontainer {
  position: relative;
  margin-left: 38%;
  justify-content: center;
  align-items: center;
  max-width: 25%;
  border-top: 1px solid rgba(255, 49, 49, 0.5);
  border-right: 1px solid rgba(0, 255, 255, 0.5);
  border-bottom: 1px solid rgba(57, 255, 20, 0.5);
  border-left: 1px solid rgba(40, 129, 192, 0.5);
}
.tagimage {
  max-width: 100%;
  max-height: 80%;
}
JSX-
<Heading title='Our Vision' />
<div className="imgcontainer" style={{ alignItems: "center" }}>
  <img className="tagimage" src={tagline} alt='' />
</div>
.Site.Data.verbs .File.BaseFileName
Here, verbs is your folder name. Sample layout:
/data
  verbs/
    file-1.yaml
    file-2.yaml
Here are two steps to solve this if you are hosting a Blazor Server app on IIS:
1. Go to Windows search and open "Turn Windows features on or off".
2. Then select:
-> Internet Information Services
-> World Wide Web Services
-> Application Development Features
- WebSocket Protocol (turn it on)
If you are using Ubuntu, then configure nginx like this:
location /_blazor {
    proxy_pass http://localhost:5002;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
}
Check that your database has an alembic_version table.
If it's not present, run:
alembic upgrade head
Then run the autogenerate command:
alembic revision --autogenerate -m "Added initial table"
Why?
Because it means you get faster access. If each chip can process X reads and Y writes per second, then in your hypothetical 8-chip stick the controller can process 8X reads and 8Y writes per second.
What if I want to get 64 bits that are stored in a single chip?
You likely won't. Interleaving means every operation is spread evenly among all chips, and since it's supposed to be transparent (i.e., the OS and apps generally don't care whether a controller has 2, 4, or 8 chips), you'd have to go out of your way to write and read something that ends up on a single chip, and in the process you'd be writing and reading the rest of the chips anyway. A normal store or read operation will end up using all the chips, very quickly, without you having to care about the details.
You have two options:
Option 1: Upgrade Gradle (Recommended)
Since Java 21 is installed, update Gradle to a compatible version (8.5 or later).
Open your android/gradle/wrapper/gradle-wrapper.properties file.
Change this line:
distributionUrl=https://services.gradle.org/distributions/gradle-7.6.3-all.zip
To:
distributionUrl=https://services.gradle.org/distributions/gradle-8.5-all.zip
Open android/build.gradle and update:
classpath 'com.android.tools.build:gradle:8.1.0'
To:
classpath 'com.android.tools.build:gradle:8.3.0'
Clean and rebuild the project:
flutter clean
flutter pub get
Option 2: Downgrade Java
If you want to keep Gradle 7.6.3, downgrade Java to version 17:
Install Java 17
Restart the terminal and run:
flutter doctor --verbose
I was facing a similar issue. I checked the DB credentials and the host URL and all was fine, but I was still getting the same error. Re-running the application after adding the properties below worked in my case.
spring.flyway.locations=classpath:db/migration
spring.flyway.baselineOnMigrate=true
Your question is already answered in another post; you can find many ways to do it there: Make scrollbars only visible when a Div is hovered over?
Nginx is listening on ports 80 and 443. I assume that Odoo is listening on port 80 as well, which will cause the problem. However, I would need more information about your network to be sure. If you have only Odoo running at IP2, you don't need nginx; just make sure Odoo listens on ports 80 and 443, preferably 443.
If you're running the project by pressing "F5" or clicking the green triangle, try running it with "Ctrl + F5" instead. I faced the same issue, and this solution worked for me.
If your project is Next.js, you can add the following to your next.config.mjs:
const nextConfig = {
  eslint: {
    ignoreDuringBuilds: true,
  },
};
export default nextConfig;
reference:
https://nextjs.org/docs/app/api-reference/config/next-config-js/eslint
I don't understand what is wrong with server-side rendering. Won't it just fully reload the page as window.location.reload does?
Can you help me create these proxies on RouterOS (v7.18.2) using riftbit/3proxy?
Thanks.
I hope you mean Visual Studio Code... if not, you're using the wrong IDE. If so, you should just be able to run 'npm run dev' in the console.
Uh, like is this still an issue 3 years later?
You can use the Swift concurrency version:
Task {
    do {
        try await CXCallDirectoryManager.sharedInstance.openSettings()
    } catch {
        // handle the error
    }
}
Tested on iOS 18.4, and it works perfectly.
That's a PyTorch version issue, as far as I found.
You must downgrade your PyTorch version.
The versions below work for me:
Python 3.11
and:
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
If using Eclipse 2025, this may happen after you have not opened your project for quite some time.
The hint is that you don't see Maven Dependencies in the Java explorer.
Just right-click your project and run Maven Update.
Assuming you have auto-compile turned on, republish your artifact to Tomcat.
Adding the line below to my package.json scripts solved the error:
"tsc": "tsc ./src/index.ts --outDir ./dist"
I figured it out; thanks to those who would have helped. I had to move the second ')' at the end to before "WINNER".
=IF(OR(AND(E12="<",H12="<", K12="<",N12="<",Q12="<"), AND(E12=">",H12=">", K12=">",N12=">",Q12=">")),"WINNER","")
You can see Project/Packages under your application. You must change this to Project Files. When you change it to Project Files, you can easily create a new folder inside the folder without problems.
You can consider this library: react-native-inner-shadow.
Recently, I faced the same issue for all events (Purchase events from the server were not deduplicated in Facebook Event Manager), like:
AddToCart events from the server are not deduplicated
InitiateCheckout events from the server are not deduplicated
Purchase events from the server are not deduplicated
ViewContent events from the server are not deduplicated
Then I found a blog related to this issue and its solution, offering actionable steps you can try:
https://orichi.info/2025/03/17/event-from-the-server-are-not-deduplicated/
It is not possible if the tasks are scheduled individually, i.e., the tasks will run independently per their own schedule on the warehouse.
You could create the task dependency using task graphs.
A task graph is a series of tasks composed of a root task and child tasks, organized by their dependencies. Each task can depend on multiple other tasks and won't run until they all complete.
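For illustration, a minimal sketch in Snowflake SQL (task, warehouse, and table names are made up); the child task declares its dependency with AFTER and runs only once the root task completes:

-- Root task: runs on a schedule
CREATE TASK load_raw
  WAREHOUSE = my_wh
  SCHEDULE = '60 MINUTE'
AS
  INSERT INTO raw_events SELECT * FROM staged_events;

-- Child task: no schedule of its own, runs after load_raw completes
CREATE TASK transform_events
  WAREHOUSE = my_wh
  AFTER load_raw
AS
  INSERT INTO events_clean SELECT * FROM raw_events;

-- Resume the child first, then the root
ALTER TASK transform_events RESUME;
ALTER TASK load_raw RESUME;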
Please review the Snowflake documentation on task graphs for more information.
GWT isn't really opinionated about the layout.
You are using UiBinder already, which can inject GWT widgets into your HTML.
Your task is to produce HTML and CSS (in UiBinder, for example) that gives the responsive layout you are looking for, whether mobile or desktop.
An example of a CSS and JavaScript library for doing a responsive layout is Bootstrap.
Convert the input values and deadline to time.Time values and compare the time.Time values with the Time.Before and Time.After methods.
time.Time values represent an instant in time and can be compared without considering the time's location.
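A minimal sketch in Go (the layout and sample values are made up):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04"

	// Parse the input value and the deadline into time.Time values.
	input, err := time.Parse(layout, "2025-03-20 14:30")
	if err != nil {
		panic(err)
	}
	deadline, err := time.Parse(layout, "2025-03-21 09:00")
	if err != nil {
		panic(err)
	}

	// Compare the instants; the locations don't matter.
	if input.Before(deadline) {
		fmt.Println("before the deadline")
	} else {
		fmt.Println("past the deadline")
	}
}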
This command format works:
curl -s -X GET "https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules?configuration.target=ip&configuration.value=$IP" -H "X-Auth-Email: $CLOUDFLARE_EMAIL" -H "X-Auth-Key: $CLOUDFLARE_API_KEY" -H "Content-Type: application/json"
You shouldn't have any issues moving from 4.x to 5.x. Type CodeEffects in NuGet search and reference proper 5.x assemblies in your project. Test it. Report any issues to https://codeeffects.com/Support/
ActiveXObject is very Microsoft-specific and only available in older, pre-Edge browsers. Even where it is available, it's disabled by default due to security concerns: imagine you load a website and it executes some command on the command line (e.g., deletes all your files).
I finally fixed mine! The problem was that after uninstalling npm, you should run this command:
npm cache clean --force
And then after that, you should reinstall npm via:
npm install
This worked, combined with all the suggestions above for the PATH variable.
In the Integration Dataset for the "DelimitedText" file, you can configure the following Encoding property:
Encoding: ISO-8859-1
This will solve the issue.
You can't convert a type itself to an int.
int is a type in Python. Please refer to this article:
https://www.w3schools.com/python/python_datatypes.asp
Types are different variations of data, like String, Integer, Dictionary, etc.
If you try to mutate a type into an int, you will encounter errors, since it's like turning the enveloping feature into a subset.
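A minimal sketch of the distinction (the values are made up): int() converts a value, but passing the type object itself fails.

print(int("42"))   # 42 -- converts the value "42" to an int

try:
    int(str)       # str here is the type itself, not a value
except TypeError as e:
    print(f"TypeError: {e}")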
Please be clearer about what the actual scenario is, and we can help you. Giving us just the error without context won't help.
In PHP:
$arabic_regex = '/[\x{0600}-\x{06FF}\x{0750}-\x{077F}\x{FB50}-\x{FBC1}\x{FBD3}-\x{FD3F}\x{FD50}-\x{FD8F}\x{FD92}-\x{FDC7}\x{FE70}-\x{FEFC}\x{FDF0}-\x{FDFD}]/u';
(Note: the '|' separators were removed; inside a character class they are matched as literal pipe characters.)
MaterialStateProperty.all<Color>(Colors.green) is deprecated and shouldn't be used anymore.
In newer Flutter versions, prefer WidgetStateProperty.all<Color>(Colors.green).
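For example, a minimal sketch inside a ButtonStyle (the surrounding widget is just an illustration):

ElevatedButton(
  style: ButtonStyle(
    // WidgetStateProperty replaces the deprecated MaterialStateProperty
    backgroundColor: WidgetStateProperty.all<Color>(Colors.green),
  ),
  onPressed: () {},
  child: const Text('Save'),
)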
You can consult Railway's private networking guide for more information.
SOLUTIONS
1. Once you've finished creating your app, go to Settings, find Deploy, and type the commands you want to run into one of the two options.
2. You could use the DATABASE_PUBLIC_URL credentials, which contain the data needed to connect from outside the internal network.
There are two ways to solve this problem
Using positive lookbehind and negative lookahead (answered by Wiktor)
Using capture groups, which I discuss here.
Consider the regex below
(?:^\s*DEBUG\s+)*(.+)
with this test data:
DEBUG How are you1?
DEBUG How are you2?
How are you3?
_DEBUG How are you4?
I want to filter out the string DEBUG from the beginning, if present. We created two groups.
Group 1 is a non-capturing group, (?:^\s*DEBUG\s+)*, that consumes the string DEBUG at the beginning. This group is optional, as it has * at the end.
Group 2 is a capturing group, (.+), that captures everything not consumed by Group 1.
The C# code to get those values is below:
string pattern = @"(?:^\s*DEBUG\s+)*(.+)";
var testInput = @"DEBUG How are you1?
DEBUG How are you2?
How are you3?
_DEBUG How are you4?";
var result = Regex.Matches(testInput, pattern, RegexOptions.Multiline);
result.Cast<Match>()
    .Select(m => m.Groups[1].Value)
    .ToList()
    .Dump(); // LINQPad's Dump(); use Console.WriteLine outside LINQPad
Output:
How are you1?
How are you2?
How are you3?
_DEBUG How are you4?
I was struggling with this issue and tried many solutions. After exhausting efforts, I finally found a working solution.
You can find the solution in my answer to the question on Stack Overflow at this link:
You can look at the datapoints below system.adapter.
These datapoints are only visible in expert mode.
OK, I think I just figured out what the issue was: I was opening "example.txt" twice in main(). Removing the second
outputFile.open("example.txt");
led to the results being written to the file.
The issue was fixed after I restarted my PC 🤔
libjpeg-turbo supports multiple precisions in a single build.
Slightly different from what you asked for, but have you tried combining all the dfs and doing a faceted plot?
library(ggplot2)
library(patchwork)
# Create different datasets for each plot
df1 <- expand.grid(x = seq(300, 800, length.out = 50), y = seq(300, 600, length.out = 50))
df1$z <- with(df1, dnorm(x, mean = 500, sd = 50) * dnorm(y, mean = 400, sd = 50))
df2 <- expand.grid(x = seq(300, 800, length.out = 50), y = seq(300, 600, length.out = 50))
df2$z <- with(df2, dnorm(x, mean = 600, sd = 50) * dnorm(y, mean = 450, sd = 50))
df3 <- expand.grid(x = seq(300, 800, length.out = 50), y = seq(300, 600, length.out = 50))
df3$z <- with(df3, dnorm(x, mean = 550, sd = 50) * dnorm(y, mean = 500, sd = 50))
df4 <- expand.grid(x = seq(300, 800, length.out = 50), y = seq(300, 600, length.out = 50))
df4$z <- with(df4, dnorm(x, mean = 650, sd = 50) * dnorm(y, mean = 350, sd = 50))
# Compute global min and max for z-values across all datasets
min_z <- min(c(df1$z, df2$z, df3$z, df4$z), na.rm = TRUE)
max_z <- max(c(df1$z, df2$z, df3$z, df4$z), na.rm = TRUE)
df.grouped <- dplyr::bind_rows(list(df1=df1, df2=df2, df3=df3, df4=df4), .id = 'source')
head(df.grouped)
ggplot(df.grouped, aes(x, y, fill = z)) +
geom_raster() +
scale_fill_viridis_c(limits = c(min_z, max_z)) +
labs(y = "Excitation Wavelength / nm",
x = "Emission Wavelength / nm") +
facet_wrap(~source, scales = "free")+
theme_classic()+
theme(strip.text = element_blank())
I think the answer is actually in your initial posting:
"there is not even a folder like 'temp' anywhere in 'azerothcore' ..."
In worldserver.conf, there is a line whose default is "":
TempDir = ""
I suspect you actually had it set.
I did: I was doing a full reinstall and got this error. I found that line in worldserver.conf, added the directory, and off it went.
Yes, you should always wrap a DB::transaction in try..catch if you want to catch the exception.
DB::transaction only handles the database rollback; it does not do any exception handling.
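A minimal sketch (table and column names are made up) of wrapping the transaction yourself:

use Illuminate\Support\Facades\DB;
use Throwable;

try {
    DB::transaction(function () {
        // Both writes commit together or roll back together.
        DB::table('accounts')->where('id', 1)->decrement('balance', 100);
        DB::table('accounts')->where('id', 2)->increment('balance', 100);
    });
} catch (Throwable $e) {
    // By the time you get here, the rollback has already happened;
    // the exception handling itself is up to you.
    report($e);
}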
I've set scrollEnabled={false}, which helped me with the same situation.
Git-flow is an alternative Git branching model. Git-flow has numerous, longer-lived branches and larger commits. Under this model, developers create a feature branch and delay merging it to the main trunk branch until the feature is complete.
You can learn more from the Microsoft Fabric Git integration documentation.
expo-cli has been deprecated for a long time now. To get the new Expo CLI, simply run npm install expo (or yarn add expo if using Yarn); the Expo CLI comes bundled with the expo package. You can refer to the Expo Docs on Expo CLI for more info. I also recommend updating Node.js to version 21.5 or newer, as it can really increase performance. Also, once you have installed the new Expo CLI, do NOT use any commands starting with expo, as those refer to the legacy CLI and will pop up that error (it would later redirect to npx expo start, but the error is still there). To get rid of the error, start your Expo commands with npx expo {command}, and the error should be gone.
Currently, Azure Tables only accepts a limited set of field types, and nested JSON or arrays are not among them. As Dasari Kamali posted, you can convert your nested JSON or arrays to a string and store that in a field.
Source: https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-the-table-service-data-model#property-types
Do you mind if I request help from you on this same issue?
The issue comes down to how MongoDB stores and indexes dates. Your ts field is stored as an ISODate (a proper BSON date), but your Java query is treating it as a numeric value (epoch milliseconds). This means the index on ts (which expects a Date type) is ignored, forcing MongoDB to do a COLLSCAN instead of an IXSCAN.
Your Java query converts timestamps to toEpochMilli(), which results in a Long value (e.g., 1733852133000).
MongoDB’s index on ts is built on BSON Date objects, not raw numbers.
When you query with a Long instead of a Date, MongoDB sees a type mismatch and ignores the index, defaulting to a full collection scan.
Use Date instead of Long: you need to ensure that your Java query uses Date objects instead of epoch milliseconds. Here's the correct way to do it:
ZonedDateTime lowerBound = ...;
ZonedDateTime upperBound = ...;
Date lowerDate = Date.from(lowerBound.toInstant());
Date upperDate = Date.from(upperBound.toInstant());
var query = Query.query(new Criteria().andOperator(
    Criteria.where("ts").gte(lowerDate),
    Criteria.where("ts").lt(upperDate)
));
var result = mongoTemplate.find(query, Events.class);
Date.from(lowerBound.toInstant()) ensures that you’re passing a proper Date object that MongoDB’s index can recognize.
The MongoDB query now correctly translates to:
{ "ts": { "$gte": ISODate("2025-01-01T01:00:00Z"), "$lt": ISODate("2025-01-02T01:00:00Z") } }
instead of:
{ "ts": { "$gte": 1733852133000, "$lt": 1733853933000 } }
This allows MongoDB to use the index properly, resulting in IXSCAN instead of COLLSCAN.
In short: convert ZonedDateTime to Date before querying; using raw epoch milliseconds (long) prevents the index from being used, leading to slow queries.
Well, I don't know how Express.js works, but from what I'm seeing there are two things:
First, you should add a middleware that returns a session with a cookie.
Second, I don't see any cookies being sent or saved in api/login.
You can also try ImportJSON() in Google Sheets to capture data in just one section. It allows you to add a URL and filters. This is one of the simplest methods, with almost zero effort, I feel.
Example:
=IMPORTJSON("https://restcountries.eu/rest/v2/", "/name", A1:D2)
I am trying to do a Drive ownership transfer using the API, from a suspended user account to the manager's email, using a workflow automation tool called n8n, and I am getting error code 403 no matter what.
Test use case:
Allowed the following scopes:
JSON body:
{
  "newOwnerUserId": "{{ $json.id }}",
  "oldOwnerUserId": "{{ $json.id }}",
  "applicationDataTransfers": [
    {
      "applicationTransferParams": [
        {
          "key": "PRIVACY_LEVEL",
          "value": ["SHARED", "PRIVATE"]
        }
      ],
      "applicationId": ["553547912911"]
    }
  ]
}
Error:
{
  "errorMessage": "Forbidden - perhaps check your credentials?",
  "errorDescription": "Request had insufficient authentication scopes.",
  "errorDetails": {
    "rawErrorMessage": [
      "403 - \"{\\n \\\"error\\\": {\\n \\\"code\\\": 403,\\n \\\"message\\\": \\\"Request had insufficient authentication scopes.\\\",\\n \\\"errors\\\": [\\n {\\n \\\"message\\\": \\\"Insufficient Permission\\\",\\n \\\"domain\\\": \\\"global\\\",\\n \\\"reason\\\": \\\"insufficientPermissions\\\"\\n }\\n ],\\n \\\"status\\\": \\\"PERMISSION_DENIED\\\",\\n \\\"details\\\": [\\n {\\n \\\"@type\\\": \\\"type.googleapis.com/google.rpc.ErrorInfo\\\",\\n \\\"reason\\\": \\\"ACCESS_TOKEN_SCOPE_INSUFFICIENT\\\",\\n \\\"domain\\\": \\\"googleapis.com\\\",\\n \\\"metadata\\\": {\\n \\\"service\\\": \\\"admin.googleapis.com\\\",\\n \\\"method\\\": \\\"ccc.hosted.frontend.datatransfer.v1.DatatransferTransfers.Insert\\\"\\n }\\n }\\n ]\\n }\\n}\\n\""
    ],
    "httpCode": "403"
  },
  "n8nDetails": {
    "nodeName": "HTTP Request3",
    "nodeType": "n8n-nodes-base.httpRequest",
    "nodeVersion": 4.2,
    "itemIndex": 0,
    "time": "2/28/2025, 11:32:54 AM",
    "n8nVersion": "1.66.0 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "NodeApiError: Forbidden - perhaps check your credentials?",
      "    at Object.requestWithAuthentication (/usr/lib/node_modules/n8n/node_modules/n8n-core/src/NodeExecuteFunctions.ts:2000:10)",
      "    at processTicksAndRejections (node:internal/process/task_queues:95:5)",
      "    at Object.requestWithAuthentication (/usr/lib/node_modules/n8n/node_modules/n8n-core/src/NodeExecuteFunctions.ts:3302:11)"
    ]
  }
}
The images attached to the case show the error message, the Client ID, the Service Account used, and the Drive API scopes currently in use.
I look forward to your assistance with the correct scope for Drive.
I understand you are trying to test E2E.
If you are using the client bean directly, the server and client need to be separate, and you need to bind an actual port. In this case, the following guide is the best I know of:
https://gist.github.com/silkentrance/b5adb52d943555671a44e88356c889f8
Alternatively, you can fix the boot port, but then parallel test-case execution is prohibited.
If you do not need to use the client interface directly, you can just run MVC or REST client tests against your server instead of the Feign client, or invoke the controller directly. You lose the benefit of exercising Feign, but you don't need to consider the boot process for port binding during the test.
So I figured out what I was doing wrong.
I was using .GetType() when I needed to be using .GetClass().
I spent an hour searching, and minutes after posting this I found a thread with the answer.
https://discussions.unity.com/t/add-script-component-to-game-object-from-c/590211 Here's the link, I suppose.
Thank you for explaining how to build the required hash table.
I had been missing that point.
Hope this will help someone.
It is likely that Get-Configuration and the code you are suggesting originate from this Dev Blog:
https://devblogs.microsoft.com/scripting/use-powershell-to-work-with-any-ini-file/
where everything is explained in great detail.
I definitely borrowed inspiration from @torek's answer. But here, I put it into action:
Add the submodule:
git submodule add https://github.com/dipy/dipy.git
At this point, you will see the .gitmodule file:
[submodule "dipy"]
path = dipy
url = https://github.com/dipy/dipy.git
Adjust the submodule hash:
cd dipy
git checkout 1.1.1
#confirm that you got the desired git tag
git log -1
Add the tag to the main repository's git tree:
cd ../
git add dipy
git commit -m 'add dipy==1.1.1 to submodule'
Next time, clone your repository and you will get the desired tag in the submodule:
git clone --recurse-submodules [email protected]:pnlbwh/dMRIharmonization.git
cd dMRIharmonization/dipy
#confirm that you got the desired submodule tag
git log -1
The response you're seeing is not an empty JSON object but rather a Response object from the Fetch API or a similar HTTP request. This object contains metadata about the request, such as the URL, status code, headers, etc., but it does not directly contain the response body as a JSON object unless you explicitly parse it.
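A minimal sketch (the URL is a placeholder) of reading the body:

// fetch() resolves to a Response object (status, headers, etc.);
// call .json() to actually read and parse the body.
const response = await fetch("https://api.example.com/data");
console.log(response.status); // e.g. 200

const data = await response.json(); // the parsed body
console.log(data);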
This is not a bug. It is just bad labeling.
"Speakerphone" is the MICROPHONE at the bottom of the handset and "Headset earpiece" is the MICROPHONE at the top (near where your ear goes). These only exist on phones that can record in stereo natively but the bugs are:
I didn't come to Stack Overflow until just now. I refactored some code out of another project that was getting unwieldy.
Consider this problem space well on its way to being solved:
https://dev.to/dmidlo/the-problem-powershells-hashing-illusion-74p
From the context you provided, it looks like you might be using unary pull for your subscribers. In general, we recommend using the high-level client library instead.
With unary pull, Cloud Pub/Sub may return fewer messages than the maxMessages value you specify in your requests. You can verify if your requests are pulling the maximum number of messages by comparing the Sent message count and the Pull request count metrics. You should also make sure that you are not setting returnImmediately to True.
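For reference, a minimal sketch of the high-level Python client library's streaming pull (the project and subscription IDs are placeholders), which manages message flow for you instead of per-request unary pulls:

from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

def callback(message):
    print(f"Received: {message.data}")
    message.ack()

# Streaming pull: the library keeps the connection open and
# delivers messages to the callback as they arrive.
streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result(timeout=30)
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()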
How do you write the degree Celsius symbol as a superscript in .NET MAUI?
Suppose we are using <Span> or <Label> to display text. We have to show a value and the degree Celsius symbol (as a superscript); how can we achieve it?
protobuf generates a "clear" method for each field. For example: clearXyz().
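A minimal Java sketch with a hypothetical generated message Profile that has a string field nickname:

// protoc generates clearNickname() on the builder.
Profile profile = Profile.newBuilder()
    .setNickname("temp")
    .build();

Profile cleared = profile.toBuilder()
    .clearNickname()   // resets the field to its default value
    .build();

System.out.println(cleared.getNickname()); // "" (the default for strings)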
At least with Chromium browsers as of March 2025, I've personally seen messages get processed in a completely different order than the one in which they were sent. So between my own experience and the answer posted here, it seems you can't rely on messages being processed in the order they're sent.
Does the Python hello-world example provided by Cloud Code work in debug mode at all? Not a regular run, but a run in debug mode?
I finally found it. I stored the tabs in a state-management store, and I stored the component in the tab items, but NgRx really doesn't like complex objects.
CockroachDB handles high concurrency through optimistic concurrency control (OCC) and multi-version concurrency control (MVCC), which allows multiple transactions to proceed without locking resources prematurely. Conflicts are detected during transaction commits based on timestamp ordering, and CockroachDB automatically retries conflicting transactions to maintain serializable isolation. These built-in mechanisms help mitigate contention, but heavy concurrent writes to the same data can still cause conflicts and performance degradation.
To further reduce transaction contention and improve throughput, you can optimize your schema and indexing strategies. Using UUIDs rather than sequential IDs as primary keys prevents data hotspots and evenly distributes writes. Additionally, keeping transactions short, batching operations, and explicitly handling transaction retries in the application layer can greatly enhance performance. Strategic partitioning, hash-sharded indexes, and adjusting key CockroachDB configuration parameters can also help spread workload evenly across your cluster, minimizing contention.
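For illustration, a minimal client-side retry sketch in Go (the driver choice and function shape are assumptions; CockroachDB signals retryable conflicts with SQLSTATE 40001):

package txretry

import (
	"database/sql"
	"errors"
	"fmt"

	"github.com/lib/pq"
)

// runWithRetry re-runs fn when CockroachDB reports a
// serialization conflict (SQLSTATE 40001).
func runWithRetry(db *sql.DB, fn func(*sql.Tx) error) error {
	for attempt := 0; attempt < 5; attempt++ {
		tx, err := db.Begin()
		if err != nil {
			return err
		}
		if err = fn(tx); err == nil {
			if err = tx.Commit(); err == nil {
				return nil // committed
			}
		}
		tx.Rollback()
		var pqErr *pq.Error
		if !errors.As(err, &pqErr) || pqErr.Code != "40001" {
			return err // not retryable
		}
	}
	return fmt.Errorf("transaction aborted after retries")
}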
After trying the recommendation from Michael to change bdb_reserved_sectors: db 1 to bdb_reserved_sectors: dw 1, the issue was resolved! Thank you for fixing my oversight; hopefully this won't happen again.
Python Docstring Generator can't do that, but you can use alternatives like TabNine, GitHub Copilot, and BlackBox AI. These are AI-powered extensions that provide autocompletion, helping you with both coding and writing docstrings better and faster.
When using AdminLTE 3.0 with AngularJS, catching the event expanded.lte.cardwidget failed for unknown reasons. As an alternative, I had to add ng-click to the button:
<button type="button" class="btn btn-tool"
        data-card-widget="collapse"
        ng-click="bbtest($event)"><i class="fas fa-minus"></i>
</button>
The bbtest function is listed below:
bbtest(eve) {
    let box = $(eve.target).first();
    if (!box.hasClass('fa-minus')) { // expanding
        setTimeout(() => { // expansion has animation
            console.log(`doing sth`);
        }, 500);
    }
}
Try making a new Django app for your custom admin, and use any template you want.
You can find videos about it, and it is covered in the Django documentation.
I ran into a similar issue with Clerk in development locally. I have it set up for multiple projects, but when I switch between projects the servers are set up to run on the same port. This causes an issue because of the cookies that are stored by Clerk. Try deleting your cookies for localhost and re-running the app.
I have tried to use "_x0020_" and it does not work for me. I am Spanish, and I am using Excel and SharePoint in Spanish.
Try replacing the whole document view with a custom view controller that you write. You can add whatever you want in the new view.
I tried the approach shown and was unable to make it work. I'm in an Azure Function, so the test function is invoked through an API call, but the rest is the same. The log data:
[2025-03-20T20:24:45.472Z] Acquiring access token with ClientSecretCredential...
[2025-03-20T20:24:46.077Z] Access token acquired successfully.
[2025-03-20T20:24:46.168Z] Opening connection to XMLA endpoint...
[2025-03-20T20:24:46.371Z] Error: When interactive authentication is not supported, an external access-token is required; either provide it in the connection-string or by setting the AccessToken property.
The call to connection.SessionID = accessToken; throws the error. I've spent a day trying to find a way around this, but all the documentation and Copilot keep running me in circles. Any idea how to fix this? I haven't found a combination of access methods with my service principal that allows me to establish a connection to the endpoint. I can connect via SSMS. Here is my entire function:
class TestXMLA
{
    private readonly ILogger<TestXMLA> _logger;

    public TestXMLA(ILogger<TestXMLA> logger)
    {
        _logger = logger;
    }

    // Inside the TestXMLA class
    [Function("TestXMLA")]
    public async Task<HttpResponseData> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req)
    {
        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);

        // Extract service principal details from the request body
        string tenantId = data?.tenantId;
        string workspaceId = data?.workspaceId;
        string datasetName = data?.datasetName;
        string clientId = data?.clientId;
        string clientSecret = data?.clientSecret;
        string scope = "https://analysis.windows.net/powerbi/api/.default";
        string XMLAEndpoint = $"powerbi://api.powerbi.com/v1.0/myorg/{workspaceId}";

        if (string.IsNullOrEmpty(tenantId) || string.IsNullOrEmpty(workspaceId) || string.IsNullOrEmpty(datasetName))
        {
            var response = req.CreateResponse(HttpStatusCode.BadRequest);
            await response.WriteStringAsync("Missing tenant ID, workspace ID, or dataset name.");
            return response;
        }

        try
        {
            // Step 1: Acquire access token using ClientSecretCredential (Service Principal)
            _logger.LogInformation("Acquiring access token with ClientSecretCredential...");
            var credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
            var tokenRequestContext = new TokenRequestContext(new[] { scope });
            var accessToken = (await credential.GetTokenAsync(tokenRequestContext)).Token;
            _logger.LogInformation("Access token acquired successfully.");

            // Step 2: Create connection string (without token in connection string)
            string connectionString = $"Data Source={XMLAEndpoint};";

            // Step 3: Open ADOMD connection
            using (AdomdConnection connection = new AdomdConnection(connectionString))
            {
                // Apply the access token manually using SessionID
                connection.SessionID = accessToken;
                Console.WriteLine("Opening connection to XMLA endpoint...");
                connection.Open();

                // Step 4: Execute DAX query
                string query = "EVALUATE TOPN(500, Invoices)";
                using (AdomdCommand command = new AdomdCommand(query, connection))
                {
                    _logger.LogInformation("Executing query...");
                    using (AdomdDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            _logger.LogInformation(reader[0].ToString()); // Print first column (adjust as needed)
                        }
                    }
                }

                connection.Close();
                _logger.LogInformation("Connection closed successfully.");
                var response = req.CreateResponse(HttpStatusCode.OK);
                return response;
            }
        }
        catch (Exception ex)
        {
            _logger.LogError($"Error: {ex.Message}");
            var response = req.CreateResponse(HttpStatusCode.InternalServerError);
            await response.WriteStringAsync($"Error: {ex.Message}");
            return response;
        }
    }
}
This is definitely still happening. I had success when I eliminated the datetime x values when plotting the bars, then relabeled the xticks using a datetime sequence afterwards:
x_labels = np.arange(np.datetime64('start'), np.datetime64('end'), np.timedelta64(1, 'M'))
x = range(0, len(x_labels))
ax.bar(x, df.distance, width=0.95)
ax.set_xticks(ticks=x, labels=x_labels, rotation=90)
In the IO_init function, when I thought I was assigning to the field of the IO object pointer, I was actually "assigning" to the alloca'd pointer to that pointer. Later, when I tried to read from the IO object field, the data wasn't there.
You need to take a development build using EAS. Run the following command to create a development build and try running your project again:
eas build --profile development --platform android
or
eas build --profile development --platform ios
After the build is complete, install the generated build on your device or emulator and run:
npx expo start --dev-client
This will ensure all dependencies are properly linked and avoid module resolution issues.
:root is a global pseudo-class; it has the higher specificity of the two, and a common use case is storing variables you'd like to reuse.
html has less specificity compared to :root; it is also used for setting default styles, etc. The most important use case, I think, is setting the base font size to allow the use of rem and em.
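A minimal sketch of the two roles (the property and selector names are made up):

:root {
  /* Global custom properties, reusable anywhere via var() */
  --brand-color: #2881c0;
}

html {
  /* Base font size: 1rem now equals 18px everywhere */
  font-size: 18px;
}

.button {
  background: var(--brand-color);
  padding: 0.5rem 1rem; /* scales with the html font size */
}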
Adding another method to those mentioned: assuming you have VS Code launched, press Ctrl + Shift + P, then search for > Developer: Open Webview Developer Tools. This takes you to a Chromium Developer Tools window, where you'll have access to the Console, Network, Storage, and other useful browser tools.
Here is how the URLs are supposed to look:
https://steamcommunity.com/inventory/76561199445265994/730/2?l=english&count=75
So simply add the count query parameter.
Please refer to the Google Provider Configuration Reference documentation; it mentions many ways to authenticate, including OAuth.
[1] https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference
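For example, a minimal provider block using a service account key file (the file name, project, and region are placeholders):

provider "google" {
  credentials = file("service-account-key.json")
  project     = "my-project-id"
  region      = "us-central1"
}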
Unfortunately, Project does not expose formatting characteristics like Excel does. There are two basic approaches. One is to identify the criteria that was used to set the font or cell background color. This approach can be used with a task object but will be of no use if Project formatting was done subjectively (i.e., no particular criteria other than "it looks nice").
The second approach is to access the CellColor of the Cell object. This requires using foreground processing (i.e. working on the active display) by selecting task rows, field columns or individual cells.
<?xml version="1.0" encoding="utf-8"?>
I removed this line, and it worked for me.
Okay, according to this answer to a question about fetching dependencies (https://stackoverflow.com/a/75732843/1892584), you can add /json, like so:
curl https://pypi.org/pypi/django/json
I removed the div around the GridView and used the MaintainScrollPositionOnPostback property for the page.
Looks like that fixed the scroll.
After doing some more research, I found that it would be smarter to use Get-AzRecoveryServicesBackupJob by:
Initializing a variable to a single vault:
$var = Get-AzRecoveryServicesVault -ResourceGroupName "<your resource group name>" -Name "<your vault name>"
And then calling Get-AzRecoveryServicesBackupJob with the corresponding vault ID:
$list = Get-AzRecoveryServicesBackupJob -VaultID $var.ID
To display, you'd just call the array:
$list
and it would show you all of the completed, failed, and in-progress backups.
Thanks @Guillaume, but I think I found a better way that worked, adding all the constraints. Here is what I did. Thanks for your effort, though.
UPDATE ol
SET ol.RateTierIds = rpt_new.Id
FROM OrderLines ol
JOIN RatePlanTiers rpt_old ON ol.RateTierIds = rpt_old.Id
JOIN RatePlans rp_old ON rpt_old.RatePlanIDNumber = rp_old.Id AND rp_old.Name = 'Old Rate Plan'
JOIN RatePlans rp_new ON rp_new.Name = 'New Rate Plan'
JOIN RatePlanTiers rpt_new
ON rpt_new.RatePlanIDNumber = rp_new.Id
AND rpt_new.Price = rpt_old.Price
AND rpt_new.MinQty = rpt_old.MinQty
AND rpt_new.BundleId = rpt_old.BundleId;
Remember that SOPS is used for highly sensitive data. If you want to use VS Code, you should at least disable third-party plugins and Copilot. TL;DR:
SOPS_EDITOR="code --wait --new-window --disable-workspace-trust --disable-extensions --disable-telemetry" sops secrets/testing.yaml
I noticed this myself when I was testing a SOPS file in VS Code with:
EDITOR="code --wait" sops secrets/testing.yaml
I had GitHub Copilot turned on and noticed that it sends my unencrypted secrets in plaintext to a remote server to get autocompletions.
When I typed:
password: "correct
GitHub Copilot suggested:
password: "correct horse battery staple"
Only adding --disable-extensions would probably be enough, but at least for me it did not work unless I added --new-window as well.
Then I noticed that VS Code actually has plenty of undocumented flags available in its source code, so --disable-workspace-trust and --disable-telemetry seemed useful too.
SOPS_EDITOR="code --wait --new-window --disable-workspace-trust --disable-extensions --disable-telemetry" sops secrets/testing.yaml
It's a good idea to add SOPS_EDITOR as an env var in your shell config so that you don't need to type it every time.
This is great because all of us probably have some suspicious third-party VS Code extension installed, and this way all of the extensions are disabled for the unencrypted SOPS file.
I noticed that it's also possible to disable Copilot by modifying your user profile settings in VS Code:
{
  // We will use these custom file associations to disable Copilot
  // See more in: https://stackoverflow.com/a/77908836/1337062
  "files.associations": {
    // If the repo contains secrets in a .env file, it's better to ignore it
    ".env*": "plaintext",
    // SOPS creates unencrypted temporary files here on macOS
    "/var/folders/*/**": "plaintext",
  },
  // This setting can't be altered here and
  // needs to be copied directly into user settings
  "github.copilot.enable": {
    "*": true,
    "plaintext": false,
  },
}
I highly recommend adding the stricter SOPS_EDITOR env var and the extra file associations, which disable Copilot for the plaintext temp files.
If you want to see how I added this into my git repo using elixir and nix with sops you can have a look at the linked commit.
Stay safe!
docker run -d -p 8081:8081 -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin -e ME_CONFIG_MONGODB_ADMINPASSWORD=password -e ME_CONFIG_BASICAUTH_USERNAME=myuser -e ME_CONFIG_BASICAUTH_PASSWORD=mypassword -e ME_CONFIG_MONGODB_SERVER=mongo --net mongo-network --name mongo-express mongo-express
Yes, the above command works for me when I enter the basic-auth username and password.
If the requirements file is not being installed during deployment, or to force-install requirements.txt during deployment, set the environment variable SCM_DO_BUILD_DURING_DEPLOYMENT=true in the Web App portal.
This helps rebuild the application on every re-deployment.
import pandas as pd
from sklearn.model_selection import train_test_split

# Read the raw CSV; the github.com .../blob/... URL serves an HTML page, not the CSV itself
df = pd.read_csv('https://raw.githubusercontent.com/RyanNolanData/YouTubeData/main/500hits.csv', encoding='latin-1')
df.head()
With the new UI, it was moved to the top bar (to the right of the project name).
Had a rough time finding it myself...